Search results for: superficial discovery
39 Parasitological Tracking of Wild Passerines in the Group for the Rehabilitation of Native Fauna and Its Habitat (GREFA)
Authors: Catarina Ferreira Rebelo, Luis Madeira de Carvalho, Fernando González González
Abstract:
The order Passeriformes is the richest and most abundant group of birds, with approximately 6,500 species, meaning that two out of every three bird species are passerines. They are globally distributed and exhibit remarkable morphological and ecological variability. While numerous species of parasites have been identified and described in wild birds, there has been little focus on Passeriformes. Seventeen passerines admitted to GREFA, a wildlife rehabilitation center, during October, November and December 2022 were analyzed. The species included Aegithalos caudatus, Anthus pratensis, Carduelis chloris, Certhia brachydactyla, Erithacus rubecula, Fringilla coelebs, Parus ater, Passer domesticus, Sturnus unicolor, Sylvia atricapilla, Turdus merula and Turdus philomelos. Data regarding each bird's history were collected, and necropsies were conducted to determine the cause of death, assess body condition and detect parasites. Additionally, samples of intestinal content were collected for direct/fecal smear, flotation and sedimentation techniques. Sixteen passerines (94.1%) were positive for parasitic forms in at least one of the techniques used, including parasites detected at necropsy. Adult specimens of both sexes and tritonymphs of Monojoubertia microhylla and ectoparasites of the genus Ornithonyssus were identified. Macroscopic adult endoparasites were also found during necropsies, including Diplotriaena sp., Serratospiculum sp. and Porrocaecum sp. Coccidian parasitism was also observed, although without sporulation. Additionally, eggs of nematodes from various genera were detected, such as Diplotriaena sp., Capillaria sp., Porrocaecum sp., Syngamus sp. and Strongyloides sp., as well as eggs of trematodes, specifically of the genus Brachylecithum, and cestode oncospheres whose genera were not identified. To our knowledge, the respiratory nematode Serratospiculum sp.
found in this study is reported for the first time in passerines in the Iberian Peninsula, along with the application of common coprological techniques for the identification of eggs in intestinal content. The majority of the parasites identified use intermediate hosts present in the diet of the passerines sampled. Furthermore, certain parasites with a direct life cycle could exert greater influence in specific scenarios, such as within nests or during rehabilitation in wildlife centers. These parasites may affect intraspecific competition, increase susceptibility to predators or lead to death. However, their cost to wild birds is often unclear, as individuals can carry various parasites without significant harm. Wild birds also serve as important sources of parasites for other animal groups, including humans and other mammals. This study provides valuable insights into the parasitic fauna of these birds, serving as a cornerstone for future epidemiological investigations and enhancing our understanding of these avian species.
Keywords: birds, parasites, passerines, wild, Spain
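As a note on the prevalence figure: with 17 birds examined and 16 positive in at least one technique, the 94.1% follows directly. A minimal sketch of the "positive in at least one technique" logic, using invented per-bird records (not the study's data) chosen only to match the reported totals:

```python
# Hypothetical per-bird results: True = parasitic forms detected by that technique.
# The actual study records are not reproduced here; counts are chosen to match
# the reported totals (16 of 17 birds positive in at least one technique).
birds = [
    {"smear": True, "flotation": False, "sedimentation": False, "necropsy": False}
] * 16 + [
    {"smear": False, "flotation": False, "sedimentation": False, "necropsy": False}
]

# A bird counts as positive if any single technique detected parasitic forms.
positives = sum(any(b.values()) for b in birds)
prevalence = 100 * positives / len(birds)
print(f"{positives}/{len(birds)} positive, prevalence = {prevalence:.1f}%")
# → 16/17 positive, prevalence = 94.1%
```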
Procedia PDF Downloads 42
38 Discover Your Power: A Case for Contraceptive Self-Empowerment
Authors: Oluwaseun Adeleke, Samuel Ikan, Anthony Nwala, Mopelola Raji, Fidelis Edet
Abstract:
Background: The risks associated with each pregnancy are carried almost entirely by the woman; however, the decision about whether and when to get pregnant is one that several others contend with her to make. The self-care concept offers women of reproductive age the opportunity to take control of their health and its determinants with or without the influence of a healthcare provider, family and friends. DMPA-SC self-injection (SI) is becoming the cornerstone of contraceptive self-care and has the potential to expand access and create opportunities for women to take control of their reproductive health. Methodology: To obtain insight into the influences that interfere with a woman’s capacity to make contraceptive choices independently, the Delivering Innovations in Selfcare (DISC) project conducted two intensive rounds of qualitative data collection and triangulation that included provider, client and community mobilizer interviews, facility observations, and routine program data collection. Respondents were selected through convenience sampling, and the data collected were analyzed using a codebook and Atlas-TI. The research team came together for a participatory analysis workshop to explore and interpret emergent themes. Findings: Insights indicate that women are increasingly finding their voice, independently seeking services to prevent a deterioration of their economic situation and to achieve personal ambitions. Even women who hold independent decision-making power still prefer to share that power with their male partners, whose influence on women's use of family planning and self-injection was the most dominant. There were examples of men supporting women's use of contraception to prevent unintended pregnancy, as well as men withholding support. Other men outright prevent their partners from obtaining contraceptive services, and their partners cede this sexual and reproductive health right without objection.
A woman’s decision to initiate family planning is affected by myths and misconceptions, many of which have cultural and religious origins. Some tribes are known for their reluctance to use contraception and often attach stigma to the pursuit of family planning (FP) services. Information given by the provider is accepted, and, in many cases, clients cede power to providers to shape their SI user journey. A provider’s influence on a client’s decision to self-inject is reinforced by the provider's biases and concerns. Some clients are inhibited by the presence of peers during group education at the health facility; others are motivated to seek FP services by the interest expressed by peers. There is also a growing influence of social media on FP uptake, particularly Facebook forums. Conclusion: The convenience of self-administration at home is a benefit for those who contend with various forms of social influence, as well as for covert users. Beyond increasing choice and reducing barriers to accessing Sexual and Reproductive Health (SRH) services, self-injection can initiate a process of self-discovery and agency in the contraceptive user journey.
Keywords: self-care, self-empowerment, agency, DMPA-SC, contraception, family planning, influences
Procedia PDF Downloads 71
37 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, machine learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data are well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as these data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions that will cause them to repeatedly require medical attention. An OOHC provides ad hoc triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories of patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation are incomplete, heterogeneous, and composed mostly of noisy textual notes compiled during routine OOHC activities. Through the use of deep learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory networks, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features.
Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To that end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved by using either of these machine learning approaches in conjunction with the output of our recurrent neural network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program, in relation to lexemes within true positive and true negative cases, with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data mining, machine learning, medical informatics
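The lexeme-ranking step described above — weighting the classifier's confidence on correctly classified cases by the inverse document frequency of each lexeme — can be sketched as follows. This is an illustrative reconstruction under assumed inputs (token lists and per-case confidences), not the authors' implementation:

```python
import math
from collections import defaultdict

def lexeme_indicators(cases, confidences):
    """Rank lexemes by mean classifier confidence x inverse document frequency.

    `cases` is a list of token lists (e.g. correctly classified OOHC notes);
    `confidences` gives the classifier's confidence for each case. Lexemes that
    appear in few notes but in notes the model is confident about score highest.
    """
    n = len(cases)
    doc_freq = defaultdict(int)     # number of cases containing each lexeme
    conf_sum = defaultdict(float)   # summed confidence of those cases
    for tokens, conf in zip(cases, confidences):
        for tok in set(tokens):     # count each lexeme once per case
            doc_freq[tok] += 1
            conf_sum[tok] += conf
    scores = {
        tok: (conf_sum[tok] / doc_freq[tok]) * math.log(n / doc_freq[tok])
        for tok in doc_freq
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented toy notes: frequent-attender indicators should rise to the top.
cases = [["copd", "dyspnoea"], ["copd", "cough"], ["sprain", "cough"], ["rash"]]
confs = [0.9, 0.8, 0.7, 0.6]
print(lexeme_indicators(cases, confs)[0][0])
```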
Procedia PDF Downloads 131
36 The SHIFT of Consumer Behavior from Fast Fashion to Slow Fashion: A Review and Research Agenda
Authors: Priya Nangia, Sanchita Bansal
Abstract:
As fashion cycles become more rapid, some segments of the fashion industry have adopted increasingly unsustainable production processes to keep up with demand and enhance profit margins. The growing threat to environmental and social wellbeing posed by unethical fast fashion practices, and the need to integrate the targets of the SDGs into this industry, necessitate a shift in the fashion industry's unsustainable nature, which can only be accomplished in the long run if consumers support sustainable fashion by purchasing it. Fast fashion is defined as low-cost, trendy apparel that takes inspiration from the catwalk or celebrity culture and rapidly transforms it into garments at high-street stores to meet consumer demand. Given the importance of identity formation to many consumers, the desire to be “fashionable” often outweighs the desire to be ethical or sustainable. This paradox exemplifies the tension between the human drive to consume and the will to do so in moderation. Previous research suggests that there is an attitude-behavior gap in consumer purchasing behavior, but to the best of our knowledge, no study has analysed how to encourage customers to shift from fast to slow fashion. Against this backdrop, the aim of this study is twofold: first, to identify and examine the factors that impact consumers' decisions to engage in sustainable fashion, and second, to develop a comprehensive framework for conceptualizing and encouraging researchers and practitioners to foster sustainable consumer behavior. This study used a systematic approach to collect and analyse the literature, comprising three key steps: review planning, review execution, and findings reporting. The authors identified the keywords “sustainable consumption” and “sustainable fashion” and retrieved studies from the Web of Science (WoS) (126 records) and the Scopus database (449 records).
To make the study more specific, the authors refined the subject area to management, business, and economics in the second step, retrieving 265 records. In the third step, the authors removed duplicate records and manually reviewed the articles to examine their relevance to the research issue. The final 96 research articles were used to develop this study's systematic scheme. The findings indicate that societal norms, demographics, positive emotions, self-efficacy, and awareness all affect customers' decisions to purchase sustainable apparel. The authors propose a framework, denoted by the acronym SHIFT, in which consumers are more likely to engage in sustainable behaviors when the message or context leverages the following factors: (S) social influence, (H) habit formation, (I) the individual self, (F) feelings, emotions, and cognition, and (T) tangibility. Furthermore, the authors identify five broad challenges to encouraging sustainable consumer behavior and use them to develop novel propositions. Finally, the authors discuss how the SHIFT framework can be used in practice to drive sustainable consumer behaviors. This research sought to define the boundaries of existing research while providing new perspectives on future research, with the goal of supporting the development and discovery of new fields of study and thereby expanding knowledge.
Keywords: consumer behavior, fast fashion, sustainable consumption, sustainable fashion, systematic literature review
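The duplicate-removal step in such a review (merging the WoS and Scopus result sets) is typically automated before manual screening. A minimal sketch, assuming records are matched on a normalized title; real screening, as in this review, also involves manual relevance checks, and the records below are invented:

```python
def deduplicate(records):
    """Merge bibliographic records from several databases, keeping the first
    occurrence of each normalized title. Title-only matching is a simplification;
    production screening tools also compare DOIs, years, and author lists.
    """
    seen, unique = set(), []
    for rec in records:
        # Normalize: lowercase, keep only letters/digits, so that punctuation
        # and capitalization differences between databases do not block a match.
        key = "".join(ch for ch in rec["title"].lower() if ch.isalnum())
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

wos = [{"title": "Sustainable Fashion Consumption", "source": "WoS"}]
scopus = [{"title": "Sustainable fashion consumption.", "source": "Scopus"},
          {"title": "Drivers of Slow Fashion", "source": "Scopus"}]
merged = deduplicate(wos + scopus)
print(len(merged))  # the Scopus duplicate of the WoS record is dropped
```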
Procedia PDF Downloads 90
35 The Côa Valley Ecosystem (Douro, Portugal) as a Cultural Landscape: Approach to the Management Challenges
Authors: Mariana Durana Pinto, Thierry Aubry, Eduarda Vieira
Abstract:
The Côa River, approximately 140 kilometres in length, is a tributary of the Douro River and connects two Portuguese regions: Beira Alta (Serra das Mesas, Sabugal) and Trás-os-Montes (its confluence with the Douro at Vila Nova de Foz Côa). The river is surrounded by the characteristic landscape of north-eastern Portugal. The dominant flora in the region includes olive and almond trees and vines, which provide habitat for a diverse range of native species. These include mammals such as the lynx and the Iberian wolf, birds of prey such as the Egyptian vulture and the griffon vulture, and herbivores such as red deer and roe deer. However, the Côa Valley is inextricably linked with the rocky outcrops bearing its emblematic open-air Upper Palaeolithic rock art; indeed, it houses the world's largest collection of prehistoric open-air rock art, inscribed on the World Heritage List by UNESCO in 1998. From the discovery of the first engravings in 1991 to the present day, approximately 1,500 panels with rock art, mostly engravings and carvings but also some paintings, have been discovered, inventoried and recorded, spanning from the early Upper Palaeolithic to the 20th century. The study and interpretation of the engravings and their geoarchaeological context allow the construction of a chronological timeline of human occupation and graphic production in this region. The area has been inhabited since the Early Palaeolithic, with human communities exploiting the diversity of the natural resources of the environment and adapting it to their needs. This led to the creation of an archaeological and historical cultural landscape. The region is currently inhabited by rural communities whose primary source of income is derived from agricultural activities, with a particular focus on olive oil and wine production, including the emblematic Vinho do Porto.
Additionally, the region is distinguished by activities such as stone extraction (e.g., schist and granite quarries) and tourism. The latter has progressively assumed a role in the promotion and development of the region, primarily due to the engravings of the Côa Valley itself, as well as the Alto Douro Wine Region, a cultural landscape inscribed on the UNESCO World Heritage List in 2001. These factors give rise to a series of day-to-day challenges in the management and safeguarding of the rock art. These include: I) managing conflicts between cultural heritage and economic activity (between rock art and vineyards, both classified as World Heritage); II) managing land-use planning in areas where the engravings are located (since the areas with engravings are larger than those identified as buffer zones by UNESCO); III) the absence of the legal figure of an 'archaeological park' and the need to resolve this issue; IV) managing tourist pressure and unauthorised visits; and V) managing vandalism (as a consequence of misinformation and denial).
Keywords: Douro and Côa Valleys, archaeological cultural landscapes, rock art, Douro wine, conservation challenges
Procedia PDF Downloads 11
34 Innovation in PhD Training in the Interdisciplinary Research Institute
Authors: B. Shaw, K. Doherty
Abstract:
The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute spanning art, design, media production, communication studies, computing and engineering. Across these disciplines there can seem to be enormous differences in research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multidisciplinary context. Instead of presenting results as at a conventional conference, research students were tasked with articulating their method of inquiry. A working party of students from across the disciplines had to design a call for papers, a visual identity and an event framework that would work for students of all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame for the call that would meet the expectations of different disciplines was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’.
Each session involved presentations by visual artists, communications students and computing researchers, with interdisciplinary dialogue facilitated by alumni chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research, and demonstrated that honest reportage of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, while the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects. Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.
Keywords: interdisciplinary, method, research student, training
Procedia PDF Downloads 206
33 Revealing the Intersections: Theater, Mythology, and Cross-Cultural Psychology in Creative Expression
Authors: Nadia K. Thalji
Abstract:
In the timeless tapestry of human culture, theater, mythology, and psychology intersect to weave narratives that transcend temporal and spatial boundaries. For millennia, actors have stood as guardians of intuitive wisdom, their craft serving as a conduit for the collective unconscious. This paper embarks on a journey through the realms of creative expression, melding the insights of cross-cultural psychology with the mystical allure of serendipity and synchronicity. At the nexus of these disciplines lies the enigmatic process of active imagination, a gateway to the depths of the psyche elucidated by Jung. Within the hallowed confines of the black box theater at the Department of Performing Arts, UFRGS University in Brazil, this study unfolds. Over the span of four months, a cadre of artists embarked on a voyage of exploration, harnessing the powers of imagery, movement, sound, and dreams to birth a performance that resonated with the echoes of ancient wisdom. Drawing inspiration from the fabled Oracle of Delphi and the priestesses who once dwelled within its sacred precincts, the production delves into the liminal spaces where myth and history intertwine. Through the alchemy of storytelling, participants navigate the labyrinthine corridors of cultural memory, unraveling the threads that bind the past to the present. Central to this endeavor is the phenomenon of synchronicity, wherein seemingly disparate elements coalesce in a dance of cosmic resonance. Serendipity becomes a guiding force, leading actors and audience alike along unexpected pathways of discovery. As the boundaries between performer and spectator blur, the performance becomes a crucible wherein individual narratives merge to form a collective tapestry of shared experience. Yet, beneath the surface of spectacle lies a deeper truth: the exploration of the spiritual dimensions of artistic expression. 
Through intuitive inquiry and embodied practice, artists tap into reservoirs of insight that transcend rational comprehension. In the communion of minds and bodies, the stage becomes a sacred space wherein the numinous unfolds in all its ineffable glory. In essence, this paper serves as a testament to the transformative power of the creative act. Across cultures and epochs, the theater has served as a crucible wherein humanity grapples with the mysteries of existence. Through the lens of cross-cultural psychology, we glimpse the universal truths that underlie the myriad manifestations of human creativity. As we navigate the turbulent currents of modernity, the wisdom of the ancients beckons us to heed the call of the collective unconscious. In the synthesis of myth and meaning, we find solace amidst the chaos, forging connections that transcend the boundaries of time and space. And in the sacred precincts of the theater, we discover the eternal truth that art is, and always shall be, the soul's journey into the unknown.
Keywords: theater, mythology, cross-cultural, synchronicity, creativity, serendipity, spiritual
Procedia PDF Downloads 57
32 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. In trend following, one responds to market movements that have recently happened and are currently happening, rather than to what will happen. The optimal outcome of a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid- and long-term fluctuations and separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules that follow the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
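As context for the indicator-based rules mentioned above, a classic MACD crossover rule can be sketched as follows; the 12/26/9 EMA periods are the conventional defaults, not parameters taken from this work, and the price series is invented:

```python
def ema(series, period):
    """Exponential moving average with smoothing factor 2/(period+1)."""
    alpha = 2 / (period + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd_signals(closes, fast=12, slow=26, signal=9):
    """Return 'buy'/'sell'/None per bar from MACD-line / signal-line crossovers."""
    macd = [f - s for f, s in zip(ema(closes, fast), ema(closes, slow))]
    sig = ema(macd, signal)
    signals = [None]
    for i in range(1, len(closes)):
        if macd[i] > sig[i] and macd[i - 1] < sig[i - 1]:
            signals.append("buy")    # MACD crosses above its signal line
        elif macd[i] < sig[i] and macd[i - 1] > sig[i - 1]:
            signals.append("sell")   # MACD crosses below its signal line
        else:
            signals.append(None)
    return signals

# Invented closing prices: a short downtrend followed by a recovery.
closes = [100, 99, 98, 97, 96, 97, 99, 102, 105, 108, 110, 111]
print([s for s in macd_signals(closes) if s])  # → ['buy']
```

In a full trend following system, these crossover events would be the trade signals that the abstract's classifier is trained to reproduce and anticipate.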
Semi-supervised learning is used for model training when only part of the data is labeled; semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it outperforms a supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trading information was employed, including the open, high, low and closing values and volume, from January 1, 2000 to December 31, 2022, for the São Paulo Exchange Composite index (IBOVESPA). Over this period, consistent changes in price, upward or downward, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators as features. In this learning strategy, the core idea is to use unlabeled data to generate pseudo-labels for supervised training. For evaluating the results, the annualized return and excess return, along with the Sortino and Sharpe ratios, were considered. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
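The Sharpe and Sortino ratios used in the evaluation can be computed from a series of periodic strategy returns. A minimal sketch, assuming daily returns, 252 trading days per year, and a zero risk-free rate for simplicity; the return series is invented:

```python
import math

def sharpe(returns, periods=252, rf=0.0):
    """Annualized Sharpe ratio: mean excess return over total volatility
    (population standard deviation), scaled by sqrt(periods per year)."""
    excess = [r - rf / periods for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / len(excess)
    return (mean / math.sqrt(var)) * math.sqrt(periods)

def sortino(returns, periods=252, rf=0.0):
    """Annualized Sortino ratio: like Sharpe, but penalizes only downside
    deviation, so upside volatility does not reduce the score."""
    excess = [r - rf / periods for r in returns]
    mean = sum(excess) / len(excess)
    downside = [min(r, 0.0) ** 2 for r in excess]
    dd = math.sqrt(sum(downside) / len(excess))
    return (mean / dd) * math.sqrt(periods)

daily = [0.01, -0.005, 0.002, 0.007, -0.003, 0.004]
print(round(sharpe(daily), 2), round(sortino(daily), 2))
```

Because the Sortino denominator ignores upside swings, a strategy with mostly positive returns scores markedly higher on Sortino than on Sharpe, which is why the two are usually reported together.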
Procedia PDF Downloads 89
31 Crustal Scale Seismic Surveys in Search for Gawler Craton Iron Oxide Cu-Au (IOCG) under Very Deep Cover
Authors: E. O. Okan, A. Kepic, P. Williams
Abstract:
Iron oxide copper gold (IOCG) deposits constitute important sources of copper and gold in Australia, especially since the discovery of the supergiant Olympic Dam deposit in 1975. They are considered to be metasomatic expressions of large crustal-scale alteration events occasioned by intrusive activity and are in most cases associated with felsic igneous rocks, commonly potassic igneous magmatism, with the deposits ranging from ~2.2 to 1.5 Ga in age. For the past two decades, geological, geochemical and potential-field methods have been used to identify the structures hosting these deposits, followed up by drilling. Though these methods have largely been successful for shallow targets, at greater depths their low resolution limits them to mapping only very large to gigantic deposits with sufficient contrast. As the search for ore bodies under regolith cover continues due to depletion of near-surface deposits, there is a compelling need for new technology to explore deep-seated ore bodies within 1-4 km, the current mining depth range. The seismic reflection method represents this new technology, as it offers a distinct advantage over all other geophysical techniques because of its great depth of penetration and superior spatial resolution maintained with depth. Further, in many different geological scenarios, it offers greater '3D mappability' of units within the stratigraphic boundary. Despite these superior attributes, crustal-scale seismic surveys have not been proposed because there has been no compelling argument of economic benefit to proceed with such work. For the seismic reflection method to be used at these scales (hundreds to thousands of square kilometres covered), the technical risks or the survey costs have to be reduced.
In addition, as most IOCG deposits have a large footprint due to their association with intrusions and large fault zones, we hypothesized that these deposits can be found mainly by looking for the seismic signatures of intrusions along prospective structures. In this study, we present two such cases: the Olympic Dam and Vulcan iron oxide copper-gold (IOCG) deposits, both located in the Gawler Craton, South Australia. Results from our 2D modelling experiments revealed that seismic reflection surveys using 20 m geophone and 40 m shot spacing are a feasible exploration tool for locating IOCG deposits, even when they are hosted in very complex structures. The migrated sections were not only able to identify and trace the various layers and complex structures but also showed reflections around the edges of intrusive packages. The presence of such intrusions was clearly detected over the 100 m to 1000 m depth range without loss of resolution. The modelled seismic images match the available real seismic data and have the hypothesized characteristics; thus, the seismic method seems to be a valid exploration tool for finding IOCG deposits. We therefore propose that 2D seismic surveys are viable for IOCG exploration, as they can detect mineralised intrusive structures along known favourable corridors. This would help reduce the exploration risk associated with locating undiscovered resources, as well as support life-of-mine studies enabling better development decisions from the very beginning.
Keywords: crustal scale, exploration, IOCG deposit, modelling, seismic surveys
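The survey geometry behind such modelling can be illustrated with the standard two-way reflection travel-time relation t(x) = sqrt(t0^2 + x^2/v^2) for a flat reflector. The velocity and reflector depth below are illustrative hard-rock values chosen for the 100-1000 m target range, not parameters from this study:

```python
import math

def reflection_traveltime(offset_m, depth_m, velocity_ms):
    """Two-way travel time of a reflection from a flat interface:
    t(x) = sqrt(t0^2 + x^2 / v^2), with t0 = 2 * depth / velocity
    the zero-offset two-way time."""
    t0 = 2 * depth_m / velocity_ms
    return math.sqrt(t0 ** 2 + (offset_m / velocity_ms) ** 2)

# Receivers every 20 m, as in the modelled survey layout; a reflector at
# 1000 m depth in a 5500 m/s medium (an assumed hard-rock velocity).
offsets = [i * 20 for i in range(0, 51)]  # 0 to 1000 m offset
times = [reflection_traveltime(x, 1000, 5500) for x in offsets]
print(f"t0 = {times[0]*1000:.1f} ms, t(1000 m) = {times[-1]*1000:.1f} ms")
# → t0 = 363.6 ms, t(1000 m) = 406.6 ms
```

The moveout between near and far offsets is what velocity analysis exploits before migration; at crustal scales the same relation simply runs to longer times and offsets.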
Procedia PDF Downloads 325
30 Novel Aspects of Merger Control Pertaining to Nascent Acquisition: An Analytical Legal Research
Authors: Bhargavi G. Iyer, Ojaswi Bhagat
Abstract:
It is often noted that the value of a novel idea lies in its successful implementation. However, successful implementation requires the nurturing and encouragement of innovation, and nascent competitors are a true representation of innovation in any given industry. A nascent competitor is an entity whose prospective innovation poses a future threat to an incumbent dominant competitor. While a nascent competitor benefits in several ways, it is also significantly exposed and is at greater risk of facing the brunt of exclusionary practices and abusive conduct by dominant incumbent competitors in the industry. This research paper aims to explore the risks and threats faced by nascent competitors and to analyse the benefits they accrue, as well as the advantages they proffer to the economy, through an analytical, critical study. In such competitive market environments, a rise in acquisitions of nascent competitors by incumbent dominants is observed. Therefore, this paper examines the dynamics of nascent acquisition. Further, this paper delves into the role of antitrust bodies in regulating nascent acquisition and deals with the question of how to distinguish harmful from harmless acquisitions in order to facilitate ideal enforcement practice. This paper also proposes mechanisms of scrutiny in order to ensure healthy market practices and efficient merger control in the context of nascent acquisitions. Taking into account the scope and nature of the topic, as well as the resources available and accessible, a combination of doctrinal and analytical research methods was employed, utilising secondary sources to assess and analyse the subject of research. While legally evaluating the killer acquisition theory and the nascent potential acquisition theory, this paper critically surveys the precedents and instances of nascent acquisitions.
In addition to affording a compendious account of the legislative framework and regulatory mechanisms in the United States, the United Kingdom, and the European Union, it hopes to suggest an internationally practicable legal foundation for domestic legislation and enforcement to adopt. This paper hopes to appreciate the complexities and uncertainties with respect to nascent acquisitions and attempts to suggest viable and plausible policy measures in antitrust law. It additionally attempts to examine the effects of such nascent acquisitions upon the consumer and the market economy. This paper weighs the argument for shifting the evidentiary burden onto the merging parties in order to improve merger control and regulation and expounds on the strengths and weaknesses of that approach. It is posited that an effective combination of factual, legal, and economic analysis of both the acquired and acquiring companies possesses the potential to improve ex post and ex ante merger review outcomes involving nascent companies, thus preventing anti-competitive practises. This paper concludes with an analysis of the possibility and feasibility of industry-specific identification of anti-competitive nascent acquisitions and the implementation of measures accordingly.
Keywords: acquisition, antitrust law, exclusionary practises, merger control, nascent competitor
Procedia PDF Downloads 161
29 Web-Based Decision Support Systems and Intelligent Decision-Making: A Systematic Analysis
Authors: Serhat Tüzün, Tufan Demirel
Abstract:
Decision Support Systems (DSS) have been investigated by researchers and technologists for more than 35 years. This paper analyses the developments in the architecture and software of these systems, provides a systematic analysis of different Web-based DSS approaches and Intelligent Decision-making Technologies (IDT), and offers suggestions for future studies. Decision Support Systems literature begins with building model-oriented DSS in the late 1960s, theory developments in the 1970s, and the implementation of financial planning systems and Group DSS in the early and mid-1980s. It then documents the origins of Executive Information Systems, online analytic processing (OLAP) and Business Intelligence. The implementation of Web-based DSS occurred in the mid-1990s. Since the beginning of the new millennium, intelligence has been the main focus of DSS studies. Web-based technologies are having a major impact on design, development and implementation processes for all types of DSS. Web technologies are being utilized for the development of DSS tools by leading developers of decision support technologies. Major companies are encouraging their customers to port their DSS applications, such as data mining, customer relationship management (CRM) and OLAP systems, to a web-based environment. Similarly, real-time data fed from manufacturing plants are now helping floor managers make decisions regarding production adjustment to ensure that high-quality products are produced and delivered. Web-based DSS are being employed by organizations as decision aids for employees as well as customers. A common usage of Web-based DSS has been to help customers configure products and services according to their needs. These systems allow individual customers to design their own products by choosing from a menu of attributes, components, prices and delivery options. 
The Intelligent Decision-making Technologies (IDT) domain is a fast-growing area of research that integrates various aspects of computer science and information systems. This includes intelligent systems, intelligent technology, intelligent agents, artificial intelligence, fuzzy logic, neural networks, machine learning, knowledge discovery, computational intelligence, data science, big data analytics, inference engines, recommender systems or engines, and a variety of related disciplines. Innovative applications that emerge using IDT often have a significant impact on decision-making processes in government, industry, business, and academia in general. This is particularly pronounced in finance, accounting, healthcare, computer networks, real-time safety monitoring and crisis response systems. Similarly, IDT is commonly used in military decision-making systems, security, marketing, stock market prediction, and robotics. Even though many research studies have been conducted on Decision Support Systems, a systematic analysis of the subject is still missing. To address this gap, this paper reviews recent articles on DSS. The literature has been reviewed in depth and, by classifying previous studies according to their preferences, a taxonomy for DSS has been prepared. With the aid of the taxonomic review and the recent developments on the subject, this study aims to analyze the future trends in decision support systems.
Keywords: decision support systems, intelligent decision-making, systematic analysis, taxonomic review
Procedia PDF Downloads 279
28 Entrepreneurial Venture Creation through Anchor Event Activities: Pop-Up Stores as On-Site Arenas
Authors: Birgit A. A. Solem, Kristin Bentsen
Abstract:
Scholarly attention in entrepreneurship is currently directed towards understanding entrepreneurial venture creation as a process: the journey of new economic activities from nonexistence to existence, often studied through flow or network models. To complement existing research on entrepreneurial venture creation with more interactivity-based research on organized activities, this study examines two pop-up stores as anchor events involving the on-site activities of fifteen participating entrepreneurs launching their new ventures. The pop-up stores were arranged in two middle-sized Norwegian cities and contained different brand stores that brought together actors of sub-networks and communities executing venture creation activities. The pop-up stores became on-site arenas for the entrepreneurs to create, maintain, and rejuvenate their networks, at the same time as becoming venues for the temporal coordination of activities involving existing and potential customers in their venture creation. In this work, we apply a conceptual framework based on frequently addressed dilemmas within entrepreneurship theory (discovery/creation, causation/effectuation) to further shed light on the broad aspect of on-site anchor event activities and their venture creation outcomes. The dilemma-based concepts are applied as an analytic toolkit to pursue answers regarding the nature of anchor event activities typically found within entrepreneurial venture creation and how these anchor event activities affect entrepreneurial venture creation outcomes. Our study combines researcher participation with 200 hours of observation and twenty in-depth interviews. Data analysis followed established guidelines for hermeneutic analysis and was intimately intertwined with ongoing data collection. Data were coded and categorized in NVivo 12 software and iterated several times as patterns steadily developed. 
Our findings suggest that the core anchor event activities typically found within entrepreneurial venture creation are: concept and product experimentation with visitors; arrangements to socialize (evening specials, auctions, and exhibitions); store-in-store concepts; arranged meeting places for peers; and close connections with the municipality and property owners. Further, this work points to four main entrepreneurial venture creation outcomes derived from the core anchor event activities: (1) venture attention, (2) venture idea-realization, (3) venture collaboration, and (4) venture extension. Our findings show that, depending on which anchor event activities are applied, the outcomes vary. Theoretically, this study offers two main implications. First, anchor event activities are both discovered and created, following the logic of causation, at the same time as being experimental, based on the “learning by doing” principles of effectuation during execution. Second, our research enriches prior studies on venture creation as a process. In this work, entrepreneurial venture creation activities and outcomes are understood through pop-up stores as on-site anchor event arenas, particularly suitable for the interactivity-based research requested by the entrepreneurship field. This study also reveals important managerial implications, such as that entrepreneurs should allow themselves to find creative physical venture creation arenas (e.g., pop-up stores, showrooms), as well as collaborate with partners when discovering and creating concepts and activities based on new ideas. In this way, they allow themselves to both strategically plan for and continually experiment with their venture.
Keywords: anchor event, interactivity-based research, pop-up store, entrepreneurial venture creation
Procedia PDF Downloads 91
27 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding how information flow and knowledge sharing within the network influence patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. A negative binomial model is then employed to test the nonlinear relationship between these network structural features and patent influence. 
The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
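The coupling construction and the degree centrality metric described in this abstract can be sketched in a few lines of Python. The patent and paper identifiers below are hypothetical toy data, not drawn from the study's dataset; in practice a library such as NetworkX would also supply the betweenness and closeness measures on the same edge set.

```python
from itertools import combinations

# Hypothetical toy data: each patent maps to the set of scientific papers
# (SNPRs) it cites; all names here are illustrative stand-ins.
patent_refs = {
    "P1": {"S1", "S2"},
    "P2": {"S2", "S3"},
    "P3": {"S3"},
    "P4": {"S1", "S4"},
    "P5": {"S4"},
}

# Bibliographic coupling: link two patents iff they cite a common paper.
edges = {
    frozenset((a, b))
    for a, b in combinations(patent_refs, 2)
    if patent_refs[a] & patent_refs[b]
}

# Normalised degree centrality: number of coupled neighbours / (n - 1).
n = len(patent_refs)
degree_centrality = {
    p: sum(p in e for e in edges) / (n - 1) for p in patent_refs
}
print(sorted(degree_centrality.items()))
```

The negative binomial regression described in the abstract would then take centrality scores like these as covariates, with patent citation counts as the response variable.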
Procedia PDF Downloads 54
26 Integrating Data Mining within a Strategic Knowledge Management Framework: A Platform for Sustainable Competitive Advantage within the Australian Minerals and Metals Mining Sector
Authors: Sanaz Moayer, Fang Huang, Scott Gardner
Abstract:
In the highly leveraged business world of today, an organisation’s success depends on how it can manage and organise its traditional and intangible assets. In the knowledge-based economy, knowledge as a valuable asset gives enduring capability to firms competing in rapidly shifting global markets. It can be argued that the ability to create unique knowledge assets by configuring ICT and human capabilities will be a defining factor for international competitive advantage in the mid-21st century. The concept of KM is recognized in the strategy literature, and increasingly by senior decision-makers (particularly in large firms which can achieve scalable benefits), as an important vehicle for stimulating innovation and organisational performance in the knowledge economy. This thinking has been evident in professional services and other knowledge-intensive industries for over a decade. It highlights the importance of social capital and the value of the intellectual capital embedded in social and professional networks, complementing the traditional focus on the creation of intellectual property assets. Despite the growing interest in KM within professional services, there has been limited discussion in relation to multinational resource-based industries such as mining and petroleum, where the focus has been principally on global portfolio optimization with economies of scale, process efficiencies and cost reduction. The Australian minerals and metals mining industry, although traditionally viewed as capital intensive, employs a significant number of knowledge workers, notably engineers, geologists, highly skilled technicians, legal, finance, accounting, ICT and contracts specialists working in projects or functions, representing potential knowledge silos within the organisation. This silo effect arguably inhibits knowledge sharing and retention by disaggregating corporate memory, with increased operational and project continuity risk. 
It also may limit the potential for process, product, and service innovation. In this paper, the strategic application of knowledge management incorporating contemporary ICT platforms and data mining practices is explored as an important enabler for knowledge discovery, reduction of risk, and retention of corporate knowledge in resource-based industries. With reference to the relevant strategy, management, and information systems literature, this paper highlights possible connections (currently undergoing empirical testing) between a Strategic Knowledge Management (SKM) framework incorporating supportive Data Mining (DM) practices and competitive advantage for multinational firms operating within the Australian resource sector. We also propose, based on a review of the relevant literature, that more effective management of soft and hard systems knowledge is crucial for major Australian firms in all sectors seeking to improve organisational performance through the human and technological capability captured in organisational networks.
Keywords: competitive advantage, data mining, mining organisation, strategic knowledge management
Procedia PDF Downloads 415
25 Role of Functional Divergence in Specific Inhibitor Design: Using γ-Glutamyltranspeptidase (GGT) as a Model Protein
Authors: Ved Vrat Verma, Rani Gupta, Manisha Goel
Abstract:
γ-glutamyltranspeptidase (GGT: EC 2.3.2.2) is an N-terminal nucleophile hydrolase conserved in all three domains of life. GGT plays a key role in glutathione metabolism, where it catalyzes the breakage of γ-glutamyl bonds and the transfer of the γ-glutamyl group to water (hydrolytic activity) or to amino acids or short peptides (transpeptidase activity). GGTs from bacteria, archaea, and eukaryotes (human, rat and mouse) are homologous proteins sharing >50% sequence similarity and a conserved four-layered αββα sandwich-like three-dimensional structural fold. These proteins, though similar in structure, are quite diverse in their enzyme activity: some GGTs are better at hydrolysis reactions but poor in transpeptidase activity, whereas many others show the opposite behaviour. GGT is known to be involved in various diseases such as asthma, Parkinson's disease, arthritis, and gastric cancer. Its inhibition prior to chemotherapy treatments has been shown to sensitize tumours to the treatment. Microbial GGT is known to be a virulence factor too, important for the colonization of bacteria in the host. However, all known inhibitors (mimics of its native substrate, glutamate) are highly toxic because they interfere with other enzyme pathways. Nevertheless, a few successful efforts at designing species-specific inhibitors have been reported previously. We aim to leverage the diversity seen in the GGT family (pathogen vs. eukaryotes) for designing specific inhibitors. Thus, in the present study, we have used the DIVERGE software to identify sites in GGT proteins which are crucial for the functional and structural divergence of these proteins. Since type II divergence sites vary in a clade-specific manner, they were our focus of interest throughout the study. Type II divergent sites were identified for the pathogen vs. eukaryote clusters and marked on the clade-specific representative structures HpGGT (2QM6) and HmGGT (4ZCG) of the pathogen and eukaryote clades, respectively. 
The crucial divergent sites within a 15 Å radius of the binding cavity were highlighted, and in-silico mutations were performed on these sites to delineate their role in the mechanism of catalysis and protein folding. Further, amino acid network (AAN) analysis was performed in Cytoscape to delineate assortative mixing for the cavity divergent sites, which could strengthen our hypothesis. Additionally, molecular dynamics simulations were performed for wild-type and mutant complexes close to physiological conditions (pH 7.0, 0.1 M ionic strength and 1 atm pressure), and the roles of the putative divergence sites and the structural integrity of the homologous proteins were analysed. The dynamics data were scrutinized in terms of RMSD, RMSF, non-native H-bonds and salt bridges. The RMSD and RMSF fluctuations of the protein complexes were compared, and the changes at the protein-ligand binding sites were highlighted. The outcomes of our study highlighted some crucial divergent sites which could be used for designing novel inhibitors in a species-specific manner. Since it is challenging to design a novel drug against a protein that has a close homologue in eukaryotes, this study could provide an initial platform to overcome this challenge and help identify more effective targets for novel drug discovery.
Keywords: γ-glutamyltranspeptidase, divergence, species-specific, drug design
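The RMSD and RMSF quantities used to scrutinize the dynamics data in this abstract have compact definitions; the following is a minimal NumPy sketch with a toy three-atom system, assuming frames are already superposed (a real analysis would align frames first, typically with an MD toolkit rather than hand-rolled code).

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays.
    Assumes the structures are already superposed (no alignment step)."""
    diff = coords_a - coords_b
    return np.sqrt((diff ** 2).sum(axis=1).mean())

def rmsf(trajectory):
    """Per-atom root-mean-square fluctuation over a (frames, N, 3) trajectory,
    measured relative to each atom's mean position."""
    mean_pos = trajectory.mean(axis=0)
    return np.sqrt(((trajectory - mean_pos) ** 2).sum(axis=2).mean(axis=0))

# Toy 3-atom system with two frames (illustrative, not real MD output).
ref = np.zeros((3, 3))
frame = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(rmsd(ref, frame))  # -> 1.0
```

RMSD summarizes how far a whole structure drifts from a reference, while RMSF resolves that flexibility per atom, which is why both are reported when comparing wild-type and mutant complexes.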
Procedia PDF Downloads 269
24 Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains that are heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences. 
In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large numbers of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: transformers, generative AI, gene expression design, classification
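The attention mechanism credited in this abstract with capturing long-range context reduces, in its single-head form, to softmax(QK^T / sqrt(d))V. The NumPy sketch below is a generic illustration of that operation with random toy embeddings standing in for gene tokens; it is not the DNABERT variant used in the study.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: softmax(QK^T / sqrt(d)) V.
    Q, K, V are (seq_len, d) arrays; returns a (seq_len, d) array in which
    each row is a weighted mix of all rows of V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise token affinities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy "token" embeddings for a 4-token sequence, d = 2 (self-attention:
# the same embeddings serve as queries, keys, and values).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # -> (4, 2)
```

Because every token attends to every other token in one step, path length between distant positions is constant, which is the property that lets transformers avoid the sequential bottleneck of LSTM/GRU models described above.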
Procedia PDF Downloads 59
23 Decrease in Olfactory Cortex Volume and Alterations in Caspase Expression in the Olfactory Bulb in the Pathogenesis of Alzheimer’s Disease
Authors: Majed Al Otaibi, Melissa Lessard-Beaudoin, Amel Loudghi, Raphael Chouinard-Watkins, Melanie Plourde, Frederic Calon, C. Alexandre Castellano, Stephen Cunnane, Helene Payette, Pierrette Gaudreau, Denis Gris, Rona K. Graham
Abstract:
Introduction: Alzheimer disease (AD) is a chronic disorder that affects millions of individuals worldwide. Symptoms include memory dysfunction, as well as alterations in attention, planning, language and overall cognitive function. Olfactory dysfunction is a common symptom of several neurological disorders, including AD. Studying the mechanisms underlying olfactory dysfunction may therefore lead to the discovery of potential biomarkers and/or treatments for neurodegenerative diseases. Objectives: To determine if olfactory dysfunction predicts future cognitive impairment in the aging population and to characterize the olfactory system in a murine model expressing a genetic risk factor for AD. Method: For the human study, quantitative olfactory tests (UPSIT and OMT) were performed on 93 subjects (aged 80 to 94 years) from the Quebec Longitudinal Study on Nutrition and Successful Aging (NuAge) cohort who agreed to participate in the ORCA secondary study. The telephone Modified Mini Mental State examination (t-MMSE) was used to assess cognition levels, and an olfactory self-report was also collected. In a separate cohort, olfactory cortical volume was calculated using MRI results from healthy older adults (n=25) and patients with AD (n=18) using the AAL single-subject atlas and performed with the PNEURO tool (PMOD 3.7). For the murine study, we are using Western blotting, RT-PCR and immunohistochemistry. Result: Human study: Based on the self-report, 81% of the participants claimed not to suffer from any problem with olfaction. However, based on the UPSIT, 94% of those subjects showed poor olfactory performance and different forms of microsmia. Moreover, the results confirm that olfactory function declines with age. We also detected a significant decrease in olfactory cortical volume in AD individuals compared to controls. 
Murine study: Preliminary data demonstrate that there is a significant decrease in the expression levels of the proform of caspase-3 and the caspase substrate STK3 in the olfactory bulb of mice expressing human APOE4 compared with controls. In addition, there is a significant decrease in the expression levels of the caspase-9 proform and the caspase-8 active fragment. Analysis of the mature neuron marker NeuN shows decreased expression levels of both isoforms. The data also suggest that Iba-1 immunostaining is increased in the olfactory bulb of APOE4 mice compared to wild type mice. Conclusions: The activation of caspase-3 may be the cause of the decreased levels of STK3 through caspase cleavage and may play a role in the inflammation observed. In the clinical study, our results suggest that seniors are unaware of their olfactory function status and that it is therefore not sufficient to measure olfaction using self-report in the elderly. Studying olfactory function and cognitive performance in the aging population will help to discover biomarkers in the early stage of AD.
Keywords: Alzheimer's disease, APOE4, cognition, caspase, brain atrophy, neurodegenerative, olfactory dysfunction
Procedia PDF Downloads 258
22 Differential Expression Analysis of Busseola fusca Larval Transcriptome in Response to Cry1Ab Toxin Challenge
Authors: Bianca Peterson, Tomasz J. Sańko, Carlos C. Bezuidenhout, Johnnie Van Den Berg
Abstract:
Busseola fusca (Fuller) (Lepidoptera: Noctuidae), the maize stem borer, is a major pest in sub-Saharan Africa. It causes economic damage to maize and sorghum crops and has evolved non-recessive resistance to genetically modified (GM) maize expressing the Cry1Ab insecticidal toxin. Since B. fusca is a non-model organism, very little genomic information is publicly available, and what exists is limited to some cytochrome c oxidase I, cytochrome b, and microsatellite data. The biology of B. fusca is well described but still poorly understood. This, in combination with its larval-specific behavior, may pose problems for limiting the spread of current resistant B. fusca populations or preventing resistance evolution in other susceptible populations. As part of on-going research into resistance evolution, B. fusca larvae were collected from Bt and non-Bt maize in South Africa, followed by RNA isolation (15 specimens) and sequencing on the Illumina HiSeq 2500 platform. Quality of reads was assessed with FastQC, after which Trimmomatic was used to trim adapters and remove low-quality, short reads. Trinity was used for the de novo assembly, whereas TransRate was used for assembly quality assessment. Transcript identification employed BLAST (BLASTn, BLASTp, and tBLASTx comparisons), for which two libraries (nucleotide and protein) were created from 3.27 million lepidopteran sequences. Several transcripts that have previously been implicated in Cry toxin resistance were identified for B. fusca. These included aminopeptidase N, cadherin, alkaline phosphatase, ATP-binding cassette transporter proteins, and mitogen-activated protein kinase. MEGA7 was used to align these transcripts to reference sequences from Lepidoptera to detect mutations that might be contributing to Cry toxin resistance in this pest. RSEM and Bioconductor were used to perform differential gene expression analysis on groups of B. fusca larvae challenged and unchallenged with the Cry1Ab toxin. 
Pairwise expression comparisons of transcripts that were at least 16-fold expressed at a false-discovery corrected statistical significance (p) ≤ 0.001 were extracted and visualized in a hierarchically clustered heatmap using R. A total of 329,194 transcripts with an N50 of 1,019 bp were generated from the over 167.5 million high-quality paired-end reads. Furthermore, 110 transcripts were over 10 kbp long, of which the largest one was 29,395 bp. BLAST comparisons resulted in identification of 157,099 (47.72%) transcripts, among which only 3,718 (2.37%) were identified as Cry toxin receptors from lepidopteran insects. According to transcript expression profiles, transcripts were grouped into three subclusters according to the similarity of their expression patterns. Several immune-related transcripts (pathogen recognition receptors, antimicrobial peptides, and inhibitors) were up-regulated in the larvae feeding on Bt maize, indicating an enhanced immune status in response to toxin exposure. Above all, extremely up-regulated arylphorin genes suggest that enhanced epithelial healing is one of the resistance mechanisms employed by B. fusca larvae against the Cry1Ab toxin. This study is the first to provide a resource base and some insights into a potential mechanism of Cry1Ab toxin resistance in B. fusca. Transcriptomic data generated in this study allows identification of genes that can be targeted by biotechnological improvements of GM crops.
Keywords: epithelial healing, Lepidoptera, resistance, transcriptome
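The extraction criterion in this abstract (at least 16-fold expression, i.e. |log2 fold change| ≥ 4, at an FDR-corrected p ≤ 0.001) amounts to a simple table filter. The pandas sketch below uses invented transcript names and column labels for illustration, not the study's actual RSEM/Bioconductor output.

```python
import numpy as np
import pandas as pd

# Hypothetical differential-expression results (transcript, log2 fold
# change, FDR-adjusted p-value); values and names are illustrative only.
de = pd.DataFrame({
    "transcript": ["arylphorin_1", "pgrp_2", "serpin_3", "actin_4"],
    "log2FC": [6.2, 4.5, 3.8, 0.3],
    "FDR": [1e-8, 5e-4, 2e-3, 0.9],
})

# ">= 16-fold expressed" means |log2 fold change| >= log2(16) = 4,
# combined with an FDR-corrected significance cutoff of 0.001.
hits = de[(de["log2FC"].abs() >= np.log2(16)) & (de["FDR"] <= 0.001)]
print(hits["transcript"].tolist())  # -> ['arylphorin_1', 'pgrp_2']
```

The resulting `hits` table is the kind of matrix that would then be row-scaled and passed to a hierarchical clustering/heatmap routine in R, as described above.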
Procedia PDF Downloads 203
21 Awareness Creation of Benefits of Antitrypsin-Free Nutraceutical Biopowder for Increasing Human Serum Albumin Synthesis as Possible Adjunct for Management of MDRTB or MDRTB-HIV Patients
Authors: Vincent Oghenekevbe Olughor, Olusoji Mayowa Ige
Abstract:
Except for preexisting liver disease and malnutrition, there are no predilections for low serum albumin (SA) levels in humans. At normal reference levels (4.0-6.0 g/dl), SA is a universal marker for mortality and morbidity risk assessment, where a depletion of 1.0 g/dl increases mortality risk by 137% and morbidity by 89%. It has 40 known functions, contributing significantly to the sustenance of human life. A depletion in SA to <2.2 g/dl, in most clinical settings worldwide, leads to loss of the oncotic pressure of blood, causing the clinical manifestation of bipedal oedema, during which the patients remain conscious. SA also contributes significantly to the buffering of blood to a life-sustaining pH of 7.35-7.45. A drop in blood pH to <6.9 will lead to instant coma and death, which can occur if SA continues to deplete after the manifestation of bipedal oedema. In an intervention study conducted in 2014, following the discovery that “SA is depleted during malaria fever”, a Nutraceutical formulated for use as a treatment adjunct to prevent SA depletion during malaria to <2.4 g/dl was found satisfactory after efficacy testing. There are five known types of malaria caused by apicomplexan parasites of the genus Plasmodium, the most lethal being that caused by Plasmodium falciparum, which causes malignant tertian malaria, in which the fever occurring every 48 hours coincides with the dumping of malaria toxins (hemozoin) into the blood, causing contamination: blood must remain sterile. Other apicomplexan parasites, Toxoplasma and Cryptosporidium, are opportunistic infections of HIV. Separate studies showed SA depletion in MDRTB (multidrug-resistant TB) and MDRTB-HIV patients by the same mechanism discovered with malaria, and such depletion will be further complicated whenever apicomplexan parasitic infections co-exist. 
Both the apicomplexan parasites and the TB pathogen belong to the obligate group of parasites, which replicate only inside their hosts, and most of them can over-consume host nutrients during parasitaemia. In MDRTB patients, the body repeatedly attempts to prevent depletion of SA to critical levels in the presence of adequate nutrients; in MDRTB-HIV patients it succeeds only for a while. These groups of patients will therefore benefit from the nutraceutical already tested in malaria patients. The nutraceutical biopowder was formulated (to BP 1988 specification) from twelve nature-based, food-grade nutrients containing all the nutrients dedicated to ensuring improved synthesis of albumin by the liver. The nutraceutical was administered daily for 38±2 days to 23 children in a prospective phase-2 clinical trial, and its impact on body weight and core blood parameters was documented at the start and end of the efficacy-testing period. Sixteen children who did not experience malaria-induced depletion of SA had a significant SA increase; seven children who did experience malaria-induced depletion had an insignificant SA decrease. The packed cell volume percentage (PCV%), a measure of the oxygen-carrying capacity of blood and of the amount of nutrients the body can absorb, increased in both groups. Total serum proteins (SA + globulins) increased or decreased within the normal range. In conclusion, MDRTB and MDRTB-HIV patients will benefit from a variant of this nutraceutical used as a treatment adjunct.
Keywords: antitrypsin-free nutraceutical, apicomplexan parasites, no predilections for low serum albumin, toxoplasmosis
20 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that have not evolved in nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. The construction of computational models to realize genetic circuits is an especially challenging task, since it requires discovering the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer for accurately predicting gene expression in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains, which depends heavily on the spatial arrangement of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, suffer greatly from vanishing gradients and low efficiency, as they sequentially process past states and compress contextual information into a bottleneck on long input sequences.
In other words, these architectures are not equipped with the attention mechanisms necessary to follow a long chain of genes spanning thousands of tokens. To address the above limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machines, and Artificial Neural Networks, achieved reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, achieved a perfect accuracy of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier does not depend on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers
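The scaled dot-product attention that the abstract credits with capturing long-range context in gene sequences can be illustrated with a minimal, self-contained NumPy sketch. Everything here is a toy: the 6-mer tokenisation, random embeddings, random weights, and the single-head, single-layer setup are assumptions for illustration, not the DNABERT variant described in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product attention: every token attends to every other token,
    # which is how long-range dependencies along a gene sequence are captured.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    return softmax(scores, axis=-1) @ V

# Toy setup: overlapping 6-mer tokens of a short DNA string, embedded in d=8 dims.
seq = "ATGCGTACGTTAGC"
k, d = 6, 8
tokens = [seq[i:i + k] for i in range(len(seq) - k + 1)]
vocab = {t: i for i, t in enumerate(sorted(set(tokens)))}
emb = rng.normal(size=(len(vocab), d))
X = emb[[vocab[t] for t in tokens]]            # (n_tokens, d)

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
H = self_attention(X, Wq, Wk, Wv)              # contextualised token states
pooled = H.mean(axis=0)                        # sequence-level representation

# Toy binary "expressed / not expressed" classifier head (untrained weights).
w, b = rng.normal(size=d), 0.0
p_expressed = 1.0 / (1.0 + np.exp(-(pooled @ w + b)))
print(f"P(circuit expressed) = {p_expressed:.3f}")
```

In the actual model, the embeddings and weights are learned during pre-training and fine-tuning; the sketch only shows why attention scores let any token weight any other token, regardless of distance.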
19 A Postmodern Framework for Quranic Hermeneutics
Authors: Christiane Paulus
Abstract:
Post-Islamism assumes that the Quran should not be viewed in terms of what Lyotard identifies as a 'meta-narrative'. However, its socio-ethical content can be viewed as critical of power discourse (Foucault). Practicing religion then seems limited to rites and individual spirituality, taqwa. Alternatively, can we build on Muhammad Abduh's classic-modern reform and develop it within a postmodernist frame? This is the main question of this study. Through his general and vague remarks on the context of the Quran, Abduh was the first to refer to the historical and cultural distance of the text as an obstacle for interpretation. His application, however, corresponded to the modern absolute idea of an authentic sharia. He was followed by Amin al-Khuli, who hermeneutically linked the content of the Quran to the theory of evolution. Fazlur Rahman and Nasr Hamid Abu Zayd remain reluctant to go beyond the general level in terms of context. The hermeneutic circle therefore persists as a challenge: how to get out and overcome one's own assumptions? Insight into, and acceptance of, the lasting ambivalence of understanding can be grasped as a postmodern approach; it is documented in Derrida's discovery of the shift in textual meanings (différance) and in Lyotard's theory of the différend. The resulting mixture of meanings (Wolfgang Welsch) can be read together with the classic ambiguity of the premodern interpreters of the Quran (Thomas Bauer). Confronting hermeneutic difficulties in general, Niklas Luhmann shows that every description is an attribution, a tautology, i.e., it remains within the circle. 'De-tautologization' is possible, namely by analyzing the distinctions, in the sense of objective, temporal, and social information, that every text contains. This could be expanded with the Kantian aesthetic dimension of reason (the Critique of Judgment), corresponding to the iʿjaz of the Quran.
Luhmann asks, 'What distinction does the observer/author make?' The Quran, as speech from God to its first listeners, can be seen as a discourse responding to the problems of everyday life at that time, which can be viewed as the general aim of the entire Quran. By reconstructing Quranic lifeworlds (Alfred Schütz) in detail, the social structure crystallizes: the socio-economic differences and the enormous poverty. The Quranic instruction to provide the basic needs of the neglected groups, which often intersect (the old, the poor, slaves, women, children), can be seen immediately in the text. First, the references to lifeworlds/social problems and discourses in longer Quranic passages should be hypothesized. Subsequently, information from the classic commentaries can be extracted; the classical tafsir, in particular, contains rich narrative material for such reconstruction. By selecting and assigning suitable, specific context information, the meaning of the description becomes condensed (Clifford Geertz). In this manner, the text necessarily acquires an alienation and becomes newly accessible. The socio-ethical implications can thus be grasped from the difference between the original problem and the revealed/improved order or procedure; this small step can be materialized as such, not as an absolute solution but as offering plausible patterns for today's challenges, such as Agenda 2030.
Keywords: postmodern hermeneutics, condensed description, sociological approach, small steps of reform
18 Medicompills Architecture: A Mathematical Precise Tool to Reduce the Risk of Diagnosis Errors on Precise Medicine
Authors: Adriana Haulica
Abstract:
Powered by machine learning, Precise medicine is by now tailored to use genetic and molecular profiling, with the aim of optimizing the therapeutic benefits for cohorts of patients. As the majority of machine learning algorithms derive from heuristics, their outputs have only contextual validity. This is not very restrictive, in the sense that medicine itself is not an exact science. Meanwhile, the progress made in molecular biology, bioinformatics, computational biology, and Precise medicine, correlated with the huge amount of human biology data and the increase in computational power, opens new healthcare challenges. A more accurate diagnosis is needed, along with real-time treatments, by processing as much as possible of the available information. The purpose of this paper is to present a deeper vision for the future of artificial intelligence in Precise medicine. In fact, current machine learning algorithms use standard mathematical knowledge, mostly Euclidean metrics and standard computation rules. The loss of information arising from these classical methods prevents obtaining 100% evidence in the diagnosis process. To overcome these problems, we introduce MEDICOMPILLS, a new architectural concept tool for information processing in Precise medicine that delivers diagnosis and therapy advice. This tool processes poly-field digital resources: global knowledge related to biomedicine, in a direct or indirect manner, but also technical databases, natural language processing algorithms, and strong class-optimization functions. As the name suggests, the heart of this tool is a compiler. The approach is completely new, tailored for omics and clinical data. Firstly, the intrinsic biological intuition is different from the well-known 'needle in a haystack' approach usually used when machine learning algorithms have to process differential genomic or molecular data to find biomarkers.
Also, even though the input is seized from various types of data, the working engine inside MEDICOMPILLS does not search for patterns as an integrative tool would. Instead, this approach deciphers the biological meaning of the input data down to metabolic and physiological mechanisms, based on a compiler with grammars issued from bio-algebra-inspired mathematics. It translates input data into bio-semantic units with the help of contextual information, iteratively, until bio-logical operations can be performed on the basis of the 'common denominator' rule. The rigorousness of MEDICOMPILLS comes from the structure of the contextual information on functions, built to be analogous to mathematical 'proofs'. The major impact of this architecture is expressed by the high accuracy of the diagnosis. Delivered as a multiple-condition diagnosis, constituted by some main diseases along with unhealthy biological states, this format is highly suitable for therapy proposals and disease prevention. The use of the MEDICOMPILLS architecture is highly beneficial for the healthcare industry. The expectation is to generate a strategic trend in Precise medicine, making medicine more like an exact science and reducing the considerable risk of errors in diagnostics and therapies. The tool can be used by pharmaceutical laboratories for the discovery of new cures. It will also contribute to better design of clinical trials and speed them up.
Keywords: bio-semantic units, multiple conditions diagnosis, NLP, omics
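The "translate input data into bio-semantic units via a rule grammar" step can be pictured with a purely hypothetical sketch. Every analyte name, threshold, and rule below is invented for illustration and is not part of the MEDICOMPILLS specification; the real system's grammars are described as bio-algebra-inspired, which this lookup-table toy does not attempt to reproduce.

```python
# Hypothetical rule grammar: (inclusive range) -> bio-semantic unit.
# All names and thresholds are invented for this illustration.
RULES = {
    "glucose": [((0, 99), "normoglycemia"),
                ((100, 125), "prediabetes"),
                ((126, 10**9), "hyperglycemia")],
    "albumin": [((0.0, 2.1), "hypoalbuminemia_critical"),
                ((2.2, 3.9), "hypoalbuminemia"),
                ((4.0, 6.0), "albumin_normal")],
}

def to_semantic_units(measurements):
    """Translate raw lab values into 'bio-semantic units' by matching
    each value against the rule grammar for its analyte."""
    units = []
    for analyte, value in measurements.items():
        for (lo, hi), unit in RULES.get(analyte, []):
            if lo <= value <= hi:
                units.append(unit)
                break
    return units

print(to_semantic_units({"glucose": 112, "albumin": 2.0}))
# -> ['prediabetes', 'hypoalbuminemia_critical']
```

A compiler in the abstract's sense would then iteratively combine such units with contextual information before applying the "common denominator" rule; the sketch only shows the first translation pass.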
17 Broad Host Range Bacteriophage Cocktail for Reduction of Staphylococcus aureus as Potential Therapy for Atopic Dermatitis
Authors: Tamar Lin, Nufar Buchshtab, Yifat Elharar, Julian Nicenboim, Rotem Edgar, Iddo Weiner, Lior Zelcbuch, Ariel Cohen, Sharon Kredo-Russo, Inbar Gahali-Sass, Naomi Zak, Sailaja Puttagunta, Merav Bassan
Abstract:
Background: Atopic dermatitis (AD) is a chronic, relapsing inflammatory skin disorder characterized by dry skin and flares of eczematous lesions and intense pruritus. Multiple lines of evidence suggest that AD is associated with increased colonization by Staphylococcus aureus, which contributes to disease pathogenesis through the release of virulence factors that affect both keratinocytes and immune cells, leading to disruption of the skin barrier and immune cell dysfunction. The aim of the current study is to develop a bacteriophage-based product that specifically targets S. aureus. Methods: For the discovery of phages, environmental samples were screened on 118 S. aureus strains isolated from skin samples, followed by multiple enrichment steps. Natural phages were isolated, subjected to next-generation sequencing (NGS), and analyzed using proprietary bioinformatics tools for undesirable genes (toxins, antibiotic-resistance genes, lysogeny potential), taxonomic classification, and purity. Phage host range was determined by an efficiency of plating (EOP) value above 0.1 and by the ability of the cocktail to completely lyse liquid bacterial cultures under different growth conditions (e.g., temperature, bacterial growth stage). Results: Sequencing analysis demonstrated that the 118 S. aureus clinical strains were distributed across the phylogenetic tree of all available RefSeq S. aureus genomes (~10,750 strains). Screening environmental samples on the S. aureus isolates resulted in the isolation of 50 lytic phages from different genera, including Silviavirus, Kayvirus, Podoviridae, and a novel unidentified phage. NGS confirmed the absence of toxic elements in the phages' genomes. The host range of the individual phages, as measured by EOP, ranged from 41% (48/118) to 79% (93/118). Host-range studies in liquid culture revealed that a subset of the phages can infect a broad range of S.
aureus strains in different metabolic states, including the stationary state. Combining the single-phage EOP results of selected phages yielded a broad-host-range cocktail that infected 92% (109/118) of the strains. When tested in vitro in a liquid infection assay, clearance was achieved in 87% (103/118) of the strains, with no evidence of phage resistance throughout the study (24 hours). An S. aureus host was identified that can be used for the production of all the phages in the cocktail at the high titers suitable for large-scale manufacturing. This host was validated for the absence of contaminating prophages using advanced NGS methods combined with multiple production cycles. The phages are produced under optimized scale-up conditions and are being used for the development of a topical formulation (BX005) that may be administered to subjects with atopic dermatitis. Conclusions: A cocktail of natural phages targeting S. aureus was effective in reducing bacterial burden across multiple assays. Phage products may offer safe and effective steroid-sparing options for atopic dermatitis.
Keywords: atopic dermatitis, bacteriophage cocktail, host range, Staphylococcus aureus
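The EOP and cocktail-coverage arithmetic used above is simple enough to sketch directly. The titer values and strain names below are invented toy data, not the study's 118 clinical isolates; only the EOP > 0.1 cut-off and the "covered by at least one phage" logic come from the abstract.

```python
def eop(titer_test, titer_reference):
    """Efficiency of plating: phage titer (PFU/ml) on a test strain
    relative to its titer on the reference strain."""
    return titer_test / titer_reference

def cocktail_coverage(eop_table, threshold=0.1):
    """Fraction of strains infected by at least one phage in the cocktail,
    using the abstract's EOP > 0.1 cut-off for a productive infection."""
    strains = {s for per_strain in eop_table.values() for s in per_strain}
    covered = {s for per_strain in eop_table.values()
               for s, v in per_strain.items() if v > threshold}
    return len(covered) / len(strains)

# Toy EOP table: two phages with complementary host ranges.
table = {
    "phage_A": {"S1": 1.0,  "S2": 0.5, "S3": 0.01},
    "phage_B": {"S1": 0.02, "S2": 0.9, "S3": 0.8},
}
print(cocktail_coverage(table))  # each strain is hit by at least one phage -> 1.0
```

This is how combining single-phage host ranges can push a cocktail's coverage (92% in the study) above any individual phage's coverage (41-79%).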
16 Association between Polygenic Risk of Alzheimer's Dementia, Brain MRI and Cognition in UK Biobank
Authors: Rachana Tank, Donald. M. Lyall, Kristin Flegal, Joey Ward, Jonathan Cavanagh
Abstract:
Alzheimer's Research UK estimates that by 2050, 2 million individuals will be living with late-onset Alzheimer's disease (LOAD). However, individuals experience considerable cognitive deficits and brain pathology over decades before reaching clinically diagnosable LOAD, and studies have utilised genome-wide association studies (GWAS) and polygenic risk (PGR) scores to identify high-risk individuals and potential pathways. This investigation aims to determine whether high genetic risk of LOAD is associated with worse brain MRI and cognitive performance in healthy older adults within the UK Biobank cohort. Previous studies investigating associations of PGR for LOAD with measures of MRI or cognitive functioning have focused on specific aspects of hippocampal structure, in relatively small samples and with poor controlling for confounders such as smoking. Both the sample size of this study and the discovery GWAS sample are, to our knowledge, larger than in previous studies. Genetic interactions between the loci showing the largest effects in GWAS have not been extensively studied, and it is known that APOE e4 poses the largest genetic risk of LOAD, with potential gene-gene and gene-environment interactions of e4; for this reason, we also analyse genetic interactions of PGR with the APOE e4 genotype. We hypothesise that high genetic loading, based on a polygenic risk score of 21 SNPs for LOAD, is associated with worse brain MRI and cognitive outcomes in healthy individuals within the UK Biobank cohort. Summary statistics from the Kunkle et al. GWAS meta-analysis (cases: n=30,344; controls: n=52,427) will be used to create polygenic risk scores based on 21 SNPs, and analyses will be carried out in N=37,000 participants in the UK Biobank. This will be the largest study to date investigating PGR of LOAD in relation to MRI. MRI outcome measures include white-matter (WM) tracts and structural volumes.
Cognitive function measures include reaction time, pairs matching, trail making, digit-symbol substitution, and prospective memory. The interaction of APOE e4 alleles and PGR will be analysed by including APOE status as an interaction term coded as 0, 1, or 2 e4 alleles. Models will be partially adjusted for age, BMI, sex, genotyping chip, smoking, depression, and social deprivation. Preliminary results suggest the PGR score for LOAD is associated with decreased hippocampal volumes, including the hippocampal body (standardised beta = -0.04, P = 0.022) and tail (standardised beta = -0.037, P = 0.030), but not the hippocampal head. There were also associations of genetic risk with decreased cognitive performance, including fluid intelligence (standardised beta = -0.08, P < 0.01) and reaction time (standardised beta = 2.04, P < 0.01). No genetic interactions were found between APOE e4 dose and PGR score for MRI or cognitive measures. The generalisability of these results is limited by selection bias within the UK Biobank, as participants are less likely to be obese, to smoke, or to be socioeconomically deprived, and they report fewer health conditions than the general population. The lack of a unified approach or standardised method for calculating genetic risk scores may also be a limitation of these analyses. Further discussion and results are pending.
Keywords: Alzheimer's dementia, cognition, polygenic risk, MRI
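The standard construction of a polygenic risk score, and the 0/1/2 coding of the APOE e4 interaction term described above, can be sketched as follows. The effect sizes and genotype dosages are simulated, not the Kunkle et al. summary statistics; only the weighted-sum form and the 21-SNP count and allele coding come from the abstract.

```python
import numpy as np

def polygenic_risk_score(dosages, betas):
    """Standard weighted-sum PRS: per-SNP risk-allele dosage (0, 1 or 2)
    multiplied by the effect size from the discovery GWAS, summed over SNPs."""
    dosages = np.asarray(dosages, dtype=float)
    betas = np.asarray(betas, dtype=float)
    return float(dosages @ betas)

# Simulated inputs for one participant: 21 SNPs, matching the abstract's score.
rng = np.random.default_rng(1)
betas = rng.normal(0.0, 0.05, size=21)     # invented effect sizes (log-odds scale)
dosages = rng.integers(0, 3, size=21)      # risk-allele counts per SNP
prs = polygenic_risk_score(dosages, betas)

# APOE e4 status entered as an interaction term coded 0, 1 or 2 e4 alleles.
apoe_e4_count = 1
interaction_term = prs * apoe_e4_count
print(f"PRS = {prs:.4f}, PRS x APOE-e4 = {interaction_term:.4f}")
```

In the actual analysis, such scores are computed for ~37,000 participants and entered into regression models alongside the covariates listed above.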
15 Single Crystal Growth in Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets
Authors: Rabindranath Bag, Surjeet Singh
Abstract:
Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high-critical-temperature superconductivity in layered, two-dimensional copper oxides, many physicists turned their attention to cuprates, which led to an upsurge of interest in the synthesis and physical properties of copper-oxide-based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew physicists' attention to somewhat simpler compounds consisting of spin chains, i.e., one-dimensional lattices of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in basic science as well as in emerging technologies such as quantum computing, quantum information theory, and heat management in microelectronic devices. Spin ladders are an example of quasi-one-dimensional quantum magnets and provide a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr14Cu24O41, which exhibits many interesting and exciting physical phenomena of low-dimensional systems. Very recently, the ladder compound Sr14Cu24O41 was shown to exhibit long-distance quantum entanglement, crucial to quantum information theory. Also, it is well known that hole compensation in this material results in very high (metal-like), anisotropic thermal conductivity at room temperature. These observations suggest that Sr14Cu24O41 is a potential multifunctional material that invites further detailed investigation. Investigating these properties requires large, high-quality single crystals. However, these systems melt incongruently, which makes growing large, high-quality single crystals difficult. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets.
Apart from this, Sr14Cu24O41 has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO2 chains and planes containing two-leg Cu2O3 ladders, with intermediate Sr layers along the b-axis), which is also incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes, and charge density waves. Most research so far has involved introducing defects on the A-site (Sr). Apart from A-site (Sr) doping, there are only a few studies in which B-site (Cu) doping of polycrystalline Sr14Cu24O41 has been discussed, the reason being the existence of two possible doping sites for Cu (the CuO2 chain and the Cu2O3 ladder). Therefore, in the present work, the crystals (pristine and Cu-site doped) were grown using the TSFZ method by tuning the growth parameters. Laue diffraction images, optical polarized microscopy, and scanning electron microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth, magnetic, and transport properties of Sr14Cu24O41 and its lightly doped variants (magnetic and non-magnetic) containing less than 1% of Co, Ni, Al, and Zn impurities. Since any real system will have some amount of weak disorder, our studies on these ladder compounds with controlled dilute disorder are significant in the present context.
Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
14 Developing Pan-University Collaborative Initiatives in Support of Diversity and Inclusive Campuses
Authors: David Philpott, Karen Kennedy
Abstract:
In recognition of an increasingly diverse student population, a Teaching and Learning Framework was developed at Memorial University of Newfoundland. This framework emphasizes work that is engaging, supportive, inclusive, responsive, committed to discovery, and outcomes-oriented for both educators and learners. The goal of the Teaching and Learning Framework was to develop a number of initiatives that build on existing knowledge, proven programs, and existing supports in order to respond to the specific needs of identified groups of diverse learners: 1) academically vulnerable first-year students; 2) students with individual learning needs associated with disorders and/or mental health issues; 3) international students and those from non-western cultures. This session provides an overview of this process. The strategies employed to develop these initiatives were drawn primarily from research on student success and retention (literature review), information on pre-existing programs (environmental scan), an analysis of in-house data on students at our institution, and consultations with key informants at all of Memorial's campuses. The first initiative that emerged from this research was a pilot project proposal for a first-year success program supporting the first-year experience of academically vulnerable students. This program offers a university experience enhanced by smaller classes, supplemental instruction, learning communities, and advising sessions. The second initiative that arose under the mandate of the Teaching and Learning Framework was a collaborative effort between two institutions, Memorial University and the College of the North Atlantic. Both institutions participated in a shared conversation to examine programs and services that support an accessible and inclusive environment for students with disorders and/or mental health issues.
A report was prepared based on these conversations and an extensive review of research and programs across the country. Efforts are now being made to explore possible initiatives that address culturally diverse and non-traditional learners. While an expanding literature has emerged on diversity in higher education, the process of developing institutional initiatives is usually excluded from such discussions, with the focus remaining on effective practice. The proposals that were developed constitute a coordination and strengthening of existing services and programs: a weaving of supports to engage a diverse body of students in a sense of community. This presentation will act as a guide through the process of developing projects that address learner diversity and will engage attendees in a discussion of institutional practices implemented to overcome challenges, as well as provide feedback on institutional and student outcomes. The focus of this session is on effective practice, and it will be of particular interest to university administrators, educational developers, and educators wishing to implement similar initiatives on their campuses; possible adaptations for practice will be addressed. A presentation of findings from this research will be followed by an open discussion where the sharing of research, initiatives, and best practices for the enhancement of teaching and learning is welcomed. There is much insight and understanding to be gained through the sharing of ideas and collaborative practice as we move forward to further develop the program and prepare other initiatives in support of diversity and inclusion.
Keywords: eco-scale, green analysis, environmentally-friendly, pharmaceuticals analysis
13 Construction of an Assessment Tool for Early Childhood Development in the World of DiscoveryTM Curriculum
Authors: Divya Palaniappan
Abstract:
Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Preschool assessment tools often lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art, and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and fraught with the judgmental bias of observers. The World of DiscoveryTM curriculum focuses on accelerating the physical, cognitive, language, social, and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, following Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft, and a counting corner, so children with any of these dominant intelligences (musical, bodily-kinesthetic, and logical-mathematical) can grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of interdependent developmental areas (physical, cognitive, language, social, and emotional development), which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale measuring these developmental aspects: cognitive, language, physical, social, and emotional. Each activity strengthens one or more of the developmental aspects. During the cognitive corner, the child's perceptual reasoning, pre-math abilities, hand-eye coordination, and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities, in real time and objectively. A pilot study of the tool was done with a sample of 100 children aged 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. Norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development. A significant positive relationship between physical and cognitive development has also been observed among children in a study conducted by Sibley and Etnier. Children's 'comprehension' ability was found to be greater than their 'reasoning' and pre-math abilities, consistent with the preoperational stage of Piaget's theory of cognitive development. The average scores of various parameters obtained through the tool corroborate psychological theories on child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum
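The norming step described above, computing the mean and standard deviation of observed ratings and locating an individual child relative to them, can be sketched in a few lines. The ratings below are invented toy values on the tool's 5-point scale, not the pilot data from the 100 Chennai children.

```python
import statistics

def norms(scores):
    """Norms as described in the pilot: mean and (sample) standard
    deviation of the observed ratings for one developmental parameter."""
    return statistics.mean(scores), statistics.stdev(scores)

def z_score(raw, mu, sd):
    """Position of one child's rating relative to the cohort norm;
    scores well above the norm flag likely high performers."""
    return (raw - mu) / sd

# Toy 5-point 'cognitive' ratings for ten children (invented, not study data).
cognitive = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
mu, sd = norms(cognitive)
print(f"norm = {mu:.2f} +/- {sd:.2f}, z for a rating of 5 = {z_score(5, mu, sd):.2f}")
```

With norms per parameter and per age band, the same arithmetic supports the abstract's use case of differentiating high performers and adjusting activity difficulty.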
12 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy
Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos
Abstract:
The extended tissue damage and severe clinical outcomes of autoimmune diseases, accompanied by high annual costs to the overall health-care system, highlight the need for efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely psoriasis (PsO), the inflammatory bowel diseases (IBD) comprising Crohn's disease (CD) and ulcerative colitis (UC), and rheumatoid arthritis (RA), has provided insights into the underlying mechanisms that maintain the inflammation, such as tumor necrosis factor alpha (TNF-α). Hence, anti-TNFα biological agents pose an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to treatment. Nowadays, high-throughput technologies have been recruited to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous being transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to the 10th of November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders), and 2 RA (16 responders/6 non-responders) datasets were selected. After the separate pre-processing of each dataset, 4 separate meta-analyses were conducted: three disease-specific and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the Restricted Maximum Likelihood (REML) method.
The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% False Discovery Rate (FDR). Genes were considered Differentially Expressed (DEGs) if they had P ≤ 0.05 and |log2(FC)| ≥ log2(1.25) and were perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome Pathways for both up- and down-regulated genes in all 4 performed meta-analyses. Protein-Protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as Neutrophil Degranulation and Signaling by Interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up- and 350 down-regulated, confirming the aforementioned shared pathways and genes, as well as uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of a patient’s response to biological drugs. Meta-analysis of such gene expression data could aid the challenging effort to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as distinguish gene clusters and pathways that are altered across this heterogeneous phenotype.
Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays
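The three DEG criteria stated above can be expressed as a simple filter. The sketch below is a minimal Python illustration, not the authors' actual R/MetaVolcano pipeline; the function and argument names are hypothetical.

```python
import math

# Thresholds as stated in the abstract
P_MAX = 0.05
LOG2_FC_MIN = math.log2(1.25)   # |log2(FC)| >= log2(1.25) ~ 0.32
MIN_PERTURBED_FRACTION = 0.75   # perturbed in >= 75% of included datasets

def is_deg(p_value, log2_fc, n_perturbed, n_datasets):
    """Return True if a gene's meta-analysis summary meets all three criteria."""
    return (
        p_value <= P_MAX
        and abs(log2_fc) >= LOG2_FC_MIN
        and n_perturbed / n_datasets >= MIN_PERTURBED_FRACTION
    )

# Example: a gene down-regulated in 8 of the 9 included datasets
print(is_deg(p_value=0.01, log2_fc=-0.4, n_perturbed=8, n_datasets=9))  # True
```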
Procedia PDF Downloads 182
11 Poly(Trimethylene Carbonate)/Poly(ε-Caprolactone) Phase-Separated Triblock Copolymers with Advanced Properties
Authors: Nikola Toshikj, Michel Ramonda, Sylvain Catrouillet, Jean-Jacques Robin, Sebastien Blanquer
Abstract:
Biodegradable and biocompatible block copolymers have risen as the golden materials in both medical and environmental applications. Moreover, if their architecture is controlled, higher-end applications can be foreseen. Meanwhile, organocatalytic ROP has been promoted as a more rapid and cleaner route than traditional organometallic catalysis towards the efficient synthesis of block copolymer architectures. Therefore, we report herein a novel organocatalytic pathway using a guanidine catalyst (TBD) for the synthesis of trimethylene carbonate blocks initiated by poly(caprolactone) as a pre-polymer. A pristine PTMC-b-PCL-b-PTMC block copolymer structure, free of residual products and with the desired block proportions, was achieved within 1.5 hours at room temperature and verified by NMR spectroscopy and size-exclusion chromatography. Moreover, when elaborating block copolymer films, further stability and improved mechanical properties can be achieved via an additional crosslinking step on previously methacrylated block copolymers. Subsequently, motivated by the scarcity of studies on the phase-separation/crystallinity relationship in these semi-crystalline block copolymer systems, their intrinsic thermal and morphological properties were investigated by differential scanning calorimetry and atomic force microscopy. Firstly, by DSC measurements, the block copolymers with χABN values greater than 20 presented two distinct glass transition temperatures, close to those of the respective homopolymers, giving an initial indication of a phase-separated system. Meanwhile, the existence of the crystalline phase was supported by the presence of a melting temperature. As expected, the crystallinity-driven phase-separated morphology predominated in the AFM analysis of the block copolymers. Not even crosslinking in the melt state, and hence the creation of a dense polymer network, disturbed the crystallization.
However, the latter proved sensitive to rapid liquid-nitrogen quenching directly from the melt state. Accordingly, AFM analysis of liquid-nitrogen-quenched and crosslinked block copolymer films demonstrated a thermodynamically driven phase separation clearly predominating over the originally crystalline one. These films remained stable, with their morphology unchanged, even after 4 months at room temperature. However, as demonstrated by DSC analysis, once the temperature rose above the melting temperature of the PCL block, neither the crosslinking nor the liquid-nitrogen quenching shattered the semi-crystalline network, while access to thermodynamically phase-separated structures remained possible at temperatures below the poly(caprolactone) melting point. Precisely this coexistence of dual crosslinked/crystalline networks in the same copolymer structure allowed us to establish, for the first time, shape-memory properties in such materials, as verified by thermomechanical analysis. Moreover, the temperature at which the material recovers its original shape depended on the block arrangement, i.e., whether PTMC or PCL forms the end-blocks. It has therefore been possible to obtain a block copolymer with a transition temperature around 40°C, thus opening potential real-life medical applications. In conclusion, this initial study of the phase-separation/crystallinity relationship in PTMC-b-PCL-b-PTMC block copolymers led to the discovery of novel shape-memory materials with superior properties, widely demanded in modern-life applications.
Keywords: biodegradable block copolymers, organocatalytic ROP, self-assembly, shape-memory
Procedia PDF Downloads 128
10 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)
Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili
Abstract:
The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi, historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts, is part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted in Lechkhumi during the last ten years. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible and tuyère fragments) have been identified so far. Within the framework of integrated studies, it was established that these sites were operating in the 13th-9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration and analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite recent complex data, signs of prehistoric mines (trenches) have not been found in this part of the study area so far. Since 2018, the archaeological-geological exploration has focused on the southern part of Lechkhumi, covering the areas of the villages Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various analytical methods have been applied: petrography of mineralized rocks and slags, and atomic absorption spectrophotometry (AAS).
Careful examination of the Opitara mine workings revealed a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, ore extraction from these mines was conducted by fire-setting, using primitive tools. It was also established that the mines are cut into Jurassic mineralized volcanic rocks. The ore minerals (chalcopyrite, pyrite, galena) are associated with calcite and quartz veins. The results obtained through the petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and of prehistoric slags correlate completely with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).
Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals
Procedia PDF Downloads 181