Search results for: stable processes
779 Production of Recombinant Human Serum Albumin in Escherichia coli: A Crucial Biomolecule for Biotechnological and Healthcare Applications
Authors: Ashima Sharma, Tapan K. Chaudhuri
Abstract:
Human Serum Albumin (HSA) is one of the most in-demand therapeutic proteins, with immense biotechnological applications. The current source of HSA is human blood plasma. Blood is a limited and unsafe source, as it carries the risk of contamination by various blood-derived pathogens. This issue has led to the exploitation of various hosts with the aim of obtaining an alternative source for the production of recombinant HSA (rHSA). However, to date no host has proven commercially effective for rHSA production because of their respective limitations. Thus, there is an indispensable need to promote non-animal-derived rHSA production. Of all the host systems, Escherichia coli is one of the most convenient hosts and has contributed to the production of more than 30% of the FDA-approved recombinant pharmaceuticals. E. coli grows rapidly, and its culture reaches high cell density using inexpensive and simple substrates. The fermentation batch turnaround number for E. coli culture is 300 per year, which is far greater than that of any other available host system. Therefore, E. coli-derived recombinant products have greater economic potential, as fermentation processes are cheaper compared to other available expression hosts. Despite all these advantages, E. coli has not been successfully adopted as a host for rHSA production. The major bottleneck in exploiting E. coli as a host for rHSA production was aggregation, i.e., the majority of the expressed recombinant protein formed inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is tedious, time-consuming, laborious and expensive. Because of this limitation, the E. coli host system has been neglected for rHSA production for the last few decades. Considering the advantages of E. coli as a host, the present work targeted E. coli as an alternative host for rHSA production by resolving the major issue of inclusion body formation associated with it. In the present study, we developed a novel method for enhanced soluble and functional production of rHSA in E. coli (~60% of the total expressed rHSA in the soluble fraction) through modulation of cellular growth, folding and environmental parameters, thereby significantly improving both the expression level and the functional, soluble proportion of the total expressed rHSA in the cytosolic fraction of the host. We have therefore filled a gap in the literature by exploiting the well-studied, low-cost, fast-growing, scalable and yet neglected host system Escherichia coli for the enhanced functional production of HSA, one of the most crucial biomolecules for clinical and biotechnological applications.
Keywords: enhanced functional production of rHSA in E. coli, recombinant human serum albumin, recombinant protein expression, recombinant protein processing
Procedia PDF Downloads 345
778 In Support of Sustainable Water Resources Development in the Lower Mekong River Basin: Development of Guidelines for Transboundary Environmental Impact Assessment
Authors: Kongmeng Ly
Abstract:
The management of transboundary river basins across developing countries, such as the Lower Mekong River Basin (LMB), is frequently challenging given the divergent development and conservation priorities of the basin countries. Driven by the need to sustain economic performance and reduce poverty, the LMB countries (Cambodia, Lao PDR, Thailand, Viet Nam) are embarking on significant land use changes in the form of hydropower dams to fulfill their energy requirements. This pathway could lead to irreversible changes to the ecosystem of the Mekong River if not properly managed. Given the uncertain trade-offs of hydropower development and operation, the Lower Mekong River Basin countries, through the technical support of the Mekong River Commission (MRC) Secretariat, embarked on the decade-long development of Technical Guidelines for Transboundary Environmental Impact Assessment. Through a series of workshops, seminars, national and regional consultations, and pilot studies, and further development following the recommendations generated through legal and institutional reviews undertaken over a two-decade period, the LMB countries jointly adopted the MRC Technical Guidelines for Transboundary Environmental Impact Assessment (TbEIA Guidelines). These guidelines were developed with particular regard to the experience gained from MRC-supported consultations and technical reviews of the Xayaburi Dam Project, Don Sahong Hydropower Project and Pak Beng Hydropower Project, and lessons learned from the Srepok River and Se San River case studies commissioned by the MRC under the generous support of development partners around the globe. As adopted, the TbEIA Guidelines are designed as a mechanism supporting the national EIA legislation, processes and systems in each Member Country. In recognition of the mechanisms already agreed, the TbEIA Guidelines build on and supplement the agreements stipulated in the 1995 Agreement on the Cooperation for the Sustainable Development of the Mekong River Basin and its Procedural Rules, addressing potential transboundary environmental impacts of development projects and ensuring mutual benefits from the Mekong River and its resources. Since their adoption in 2022, the TbEIA Guidelines have already been voluntarily implemented by Lao PDR on its under-development Sekong A Downstream Hydropower Project, located on the Sekong River, a major tributary of the Mekong River. While this implementation is ongoing, with results expected in early 2024, it has thus far strengthened cooperation among the concerned Member Countries, with multiple successful open dialogues organized at national and regional levels. It is hoped that lessons learnt from this application will lead to a wider application of the TbEIA Guidelines for future water resources development projects in the LMB.
Keywords: transboundary, EIA, Lower Mekong River Basin, Mekong River
Procedia PDF Downloads 37
777 Circular Economy Maturity Models: A Systematic Literature Review
Authors: Dennis Kreutzer, Sarah Müller-Abdelrazeq, Ingrid Isenhardt
Abstract:
Resource scarcity, the energy transition and planned climate neutrality pose enormous challenges for manufacturing companies. In order to achieve these goals and holistic sustainable development, the European Union has listed the circular economy as part of the Circular Economy Action Plan. In addition to a reduction in resource consumption, reduced emissions of greenhouse gases and a reduced volume of waste, the principles of the circular economy also offer enormous economic potential for companies, such as the generation of new circular business models. However, many manufacturing companies, especially small and medium-sized enterprises, do not have the necessary capacity to plan their transformation. They need support and strategies on the path to circular transformation, because this change affects not only production but also the entire company. Maturity models offer an approach, as they enable companies to determine the current status of their transformation processes. In addition, companies can use the models to identify transformation strategies and thus promote the transformation process. While maturity models are established in other areas, e.g. IT or project management, only a few circular economy maturity models can be found in the scientific literature. The aim of this paper is to analyse the identified maturity models of the circular economy through a systematic literature review (SLR) and, among other aspects, to check their completeness as well as their quality. Since the terms "maturity model" and "readiness model" are often used to assess the transformation process, this paper considers both types of models to provide a more comprehensive result. For this purpose, circular economy maturity models at the company (micro) level were identified from the literature, compared, and analysed with regard to their theoretical and methodological structure. A specific focus was placed, on the one hand, on the analysis of the business units considered in the respective models and, on the other hand, on the underlying metrics and indicators used to determine the individual maturity level of the entire company. The results of the literature review show, for instance, a significant difference in the holism of the models' assessment frameworks. Only a few models include the entire company with supporting areas outside the value-creating core process, e.g. strategy and vision. Additionally, there are large differences in the number and type of indicators as well as in their metrics. For example, most models rely on subjective indicators and use very few objective indicators in their surveys. It was also found that there are rarely well-founded thresholds between the levels. Based on the generated results, concrete ideas and proposals for a research agenda in the field of circular economy maturity models are made.
Keywords: maturity model, circular economy, transformation, metric, assessment
Procedia PDF Downloads 112
776 Prototyping Exercise for the Construction of an Ancestral Violentometer in Buenaventura, Valle Del Cauca
Authors: Mariana Calderón, Paola Montenegro, Diana Moreno
Abstract:
Through this study, it was possible to identify the different levels and types of violence, both individual and collective, experienced by women, girls, and the sexually diverse population of Buenaventura, as expressed in the different tensions and threats against ancestrality, and to account for a social and political context of violence related to race and geopolitical location. These threats are related to: the stigma and oblivion imposed on practices and knowledge; the imposition of the hegemonic culture; the imposition of external customs as a way of erasing ancestrality; the singling out and persecution of those who practice it; the violence that the health system has exercised against ancestral knowledge and practices, especially in the case of midwives; the persecution of this knowledge and these practices by the Catholic religion; the difficulties in maintaining the practices during displacement from rural to urban areas; the use and control of ancestral knowledge and practices by armed actors; the rejection and stigma exercised by the public forces; and finally, the murder of the wise women at the hands of armed actors. This research made it possible to understand the importance of using tools such as the violentometer to support processes of resistance to violence against women, girls, and sexually diverse people; however, it is essential that these tools be adapted to the specific contexts of the people. In the analysis of violence, it was possible to identify that these forms of violence not only affect women, girls, and sexually diverse people individually but also have collective effects that threaten the territory and the ancestral culture to which they belong. Ancestrality has been the object of violence, but at the same time, it has been the place from which resistance has been organized. The identification of the violence suffered by women, girls, and sexually diverse people is also an opportunity to make visible the forms of resistance of women and communities in the face of this violence. This study examines how women, girls, and sexually diverse people in Buenaventura have been exposed to sexism and racism, which historically have been translated into specific forms of violence, in addition to the other forms of violence already identified by traditional models of the violentometer. A qualitative approach was used in the study, which included the participation of more than 40 people and two women's organizations from Buenaventura. The participants came from both urban and rural areas of the municipality of Buenaventura and were over 15 years of age. The participation of such a diverse group allowed for the exchange of knowledge and experiences, particularly between younger and older people. The instrument used for the exercise was previously defined with the leaders of the organizations and consisted of four moments that referred to i) ancestry, ii) threats to ancestry, iii) identification of resistance and iv) construction of the ancestral violentometer.
Keywords: violence against women, intersectionality, sexual and reproductive rights, black communities
Procedia PDF Downloads 80
775 An Analytical Metric and Process for Critical Infrastructure Architecture System Availability Determination in Distributed Computing Environments under Infrastructure Attack
Authors: Vincent Andrew Cappellano
Abstract:
In the early phases of critical infrastructure system design, translating distributed computing requirements into an architecture carries risk given the multitude of approaches (e.g., cloud, edge, fog). In many systems, a single requirement for system uptime / availability is used to encompass the system’s intended operations. However, architected systems may meet those availability requirements only during normal operations and not during component failure, or during outages caused by adversary attacks on critical infrastructure (e.g., physical, cyber). System designers lack a structured method to evaluate availability requirements against candidate system architectures through deep degradation scenarios (i.e., normal ops all the way down to significant damage of communications or physical nodes). This increases the risk of poor selection of a candidate architecture due to the absence of insight into true performance for systems that must operate as a piece of critical infrastructure. This research effort proposes a process to analyze critical infrastructure system availability requirements and a candidate set of system architectures, producing a metric that assesses these architectures over a spectrum of degradations to aid in selecting appropriate resilient architectures. To accomplish this, a set of simulation and evaluation efforts is undertaken that processes, in an automated way, a set of sample requirements into a set of potential architectures in which system functions and capabilities are distributed across nodes. Nodes and links have specific characteristics and, based on the sampled requirements, contribute to the overall system functionality, such that as they are impacted or degraded, the resulting functional availability of the system can be determined. A machine learning, reinforcement-based agent structurally impacts the nodes, links, and characteristics (e.g., bandwidth, latency) of a given architecture to provide an assessment of system functional uptime/availability under these scenarios. By varying the intensity of the attack and related aspects, we can create a structured method of evaluating the performance of candidate architectures against each other, producing a metric that rates their resilience to these attack types/strategies. Through multiple simulation iterations, sufficient data will exist to compare this availability metric, and the resulting architectural recommendation against the baseline requirements, with existing multi-factor computing architecture selection processes. It is intended that this additional data will improve the matching of resilient critical infrastructure system requirements to the correct architectures and implementations, supporting improved operation during times of system degradation due to failures and infrastructure attacks.
Keywords: architecture, resiliency, availability, cyber-attack
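To make the availability-under-degradation idea above concrete, the following is a minimal, illustrative sketch (not the authors' simulation framework): an architecture is modelled as a graph of nodes hosting functions, failures are applied, and the fraction of functions still reachable from a gateway node is reported. The graph, the function-to-node mapping, the reachability criterion and all names are assumptions for demonstration only.

```python
# Illustrative sketch only: a toy functional-availability calculation for a
# candidate architecture under node/link degradation. The graph layout, the
# function-to-node mapping, and the "a function is available if its hosting
# node can still reach the gateway" criterion are assumptions for demonstration.
import itertools
import networkx as nx

def functional_availability(graph, function_hosts, gateway, failed_nodes=(), failed_links=()):
    """Fraction of system functions still reachable from the gateway node."""
    g = graph.copy()
    g.remove_nodes_from(failed_nodes)
    g.remove_edges_from(failed_links)
    available = 0
    for function, host in function_hosts.items():
        if host in g and gateway in g and nx.has_path(g, gateway, host):
            available += 1
    return available / len(function_hosts)

# Toy architecture: 5 nodes in a partial mesh, 4 functions distributed across them.
arch = nx.Graph([("gw", "n1"), ("gw", "n2"), ("n1", "n3"), ("n2", "n3"), ("n3", "n4")])
functions = {"ingest": "n1", "store": "n3", "analyze": "n4", "serve": "n2"}

# Sweep single- and double-node failures to build a simple degradation profile.
for k in (1, 2):
    for failed in itertools.combinations(["n1", "n2", "n3", "n4"], k):
        a = functional_availability(arch, functions, "gw", failed_nodes=failed)
        print(f"failed={failed}: availability={a:.2f}")
```

Sweeping such failure combinations (or sampling them with an attack agent, as the abstract proposes) yields the availability-versus-degradation profile that the metric summarizes.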
Procedia PDF Downloads 106
774 Smart Contracts: Bridging the Divide Between Code and Law
Authors: Abeeb Abiodun Bakare
Abstract:
The advent of blockchain technology has birthed a revolutionary innovation: smart contracts. These self-executing contracts, encoded within the immutable ledger of a blockchain, hold the potential to transform the landscape of traditional contractual agreements. This research paper embarks on a comprehensive exploration of the legal implications surrounding smart contracts, delving into their enforceability and their profound impact on traditional contract law. The first section of this paper delves into the foundational principles of smart contracts, elucidating their underlying mechanisms and technological intricacies. By harnessing the power of blockchain technology, smart contracts automate the execution of contractual terms, eliminating the need for intermediaries and enhancing efficiency in commercial transactions. However, this technological marvel raises fundamental questions regarding legal enforceability and compliance with traditional legal frameworks. Moving beyond the realm of technology, the paper proceeds to analyze the legal validity of smart contracts within the context of traditional contract law. Drawing upon established legal principles, such as offer, acceptance, and consideration, we examine the extent to which smart contracts satisfy the requirements for forming a legally binding agreement. Furthermore, we explore the challenges posed by jurisdictional issues as smart contracts transcend physical boundaries and operate within a decentralized network. Central to this analysis is the examination of the role of arbitration and dispute resolution mechanisms in the context of smart contracts. While smart contracts offer unparalleled efficiency and transparency in executing contractual terms, disputes inevitably arise, necessitating mechanisms for resolution. We investigate the feasibility of integrating arbitration clauses within smart contracts, exploring the potential for decentralized arbitration platforms to streamline dispute resolution processes. Moreover, this paper explores the implications of smart contracts for traditional legal intermediaries, such as lawyers and judges. As smart contracts automate the execution of contractual terms, the role of legal professionals in contract drafting and interpretation may undergo significant transformation. We assess the implications of this paradigm shift for legal practice and the broader legal profession. In conclusion, this research paper provides a comprehensive analysis of the legal implications surrounding smart contracts, illuminating the intricate interplay between code and law. While smart contracts offer unprecedented efficiency and transparency in commercial transactions, their legal validity remains subject to scrutiny within traditional legal frameworks. By navigating the complex landscape of smart contract law, we aim to provide insights into the transformative potential of this groundbreaking technology.
Keywords: smart-contracts, law, blockchain, legal, technology
Procedia PDF Downloads 43
773 Archaic Ontologies Nowadays: Music of Rituals
Authors: Luminiţa Duţică, Gheorghe Duţică
Abstract:
Many of the interrogations or dilemmas of the contemporary world have found their answer in what is generically called the appeal to the matrix. This genuine spiritual exercise of re-connecting the present to its origins, to the primary source, revealed the ontological condition of timeless, ahistorical, immutable (epi)phenomena, of those pure essences concentrated in the archetypal-referential layer of human existence. Musical creation was no exception to this trend: the impasse generated by the deterministic excesses of total serialism or, conversely, by some questionable results of the extreme indeterminism proper to the avant-garde movements, stimulated many composers to rediscover a universal grammar, as an emanation of a new ‘collective’ order (the reverse of utopian individualism). In this context, the music of oral tradition, and therefore the world of the ancient modes, represented a true revelation for the composers of the twentieth century, who suddenly found themselves before unsuspected (re)sources with a major impact on all levels of the edification of the musical work: morphology, syntax, timbrality, semantics, etc. For contemporary Romanian creators, the music of rituals existing in the local archaic culture opened unsuspected perspectives for what was meant to be a synthetic, inclusive and restorative vision, in which the primary (archetypal) genuine elements merge with the latest achievements of language of the European composers. Thus, anchored in a strong and genuine modal source, the compositions analysed in this paper evoke, in a manner as modern as possible, the atmosphere of ancestral rituals such as: the invocation of rain during drought (Paparudele, Scaloianul), the funeral ceremony (Bocetul), and traditions specific to the winter holidays and the new year (Colinda, Cântecul de stea, Sorcova, traditional folklore dances). The reactivation of those rituals in the sound context of the twentieth century meant potentiating or resizing the archaic spirit of the primordial symbolic entities, through levels of complexity generated by the techniques of harmonies of chordal layers, of complex aggregates (gravitational or non-gravitational, geometric), of mixture polyphonies and polyphonies with global effect (group, mass), and by the techniques of heterophony, texture and cluster, leading to the implementation of processes of collective improvisation and instrumental theatre.
Keywords: archetype, improvisation, polyphony, ritual, instrumental theatre
Procedia PDF Downloads 303
772 Risk Assessment and Haloacetic Acids Exposure in Drinking Water in Tunja, Colombia
Authors: Bibiana Matilde Bernal Gómez, Manuel Salvador Rodríguez Susa, Mildred Fernanda Lemus Perez
Abstract:
In chlorinated drinking water, haloacetic acids have been identified and are classified as disinfection byproducts originating from the reaction of the disinfectant with natural organic matter and/or bromide ions in water sources. These byproducts can also be generated through a variety of chemical and pharmaceutical processes. The term ‘Total Haloacetic Acids’ (THAAs) is used to describe the cumulative concentration of dichloroacetic acid, trichloroacetic acid, monochloroacetic acid, monobromoacetic acid, and dibromoacetic acid in water samples, which is usually measured to evaluate water quality. The chronic presence of these acids in drinking water poses a risk of cancer in humans. THAAs were detected for the first time in 15 municipalities of Boyacá in 2023. The aim is to describe the correlation between THAA levels and digestive cancer in Tunja, a Colombian city with higher rates of digestive cancer, and to compare the risk across the 15 towns, taking into account factors such as water quality. A research project was conducted with the aim of comparing water sources based on the geographical features of each town, describing the disinfection process in the 15 municipalities, and exploring physical properties such as water temperature and pH level. The project also involved a study of contact time based on habits documented through a survey, and a comparison of socioeconomic factors and lifestyle, in order to assess the personal risk of exposure. Data on THAA levels were obtained after characterizing the water quality in urban sectors over eight months of 2022, based on the protocol described in the Stage 2 DBP Rule of the United States Environmental Protection Agency (USEPA) from 2006, which takes into account the size of the population being supplied. A cancer risk assessment was conducted to evaluate the likelihood of an individual developing cancer due to exposure to THAAs. The assessment considered exposure routes such as oral ingestion, dermal absorption, and inhalation. The chronic daily intake (CDI) for these exposure routes was calculated using specific equations, and the lifetime cancer risk (LCR) was then determined by adding the cancer risks from the three exposure routes for each HAA. The risk assessment process involved four phases: exposure assessment, toxicity evaluation, data gathering and analysis, and risk definition and management. The results indicate a cumulatively higher risk of digestive cancer due to THAA exposure in drinking water.
Keywords: haloacetic acids, drinking water, water quality, cancer risk assessment
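The abstract refers to chronic daily intake (CDI) and lifetime cancer risk (LCR) equations without stating them. As a hedged illustration, the sketch below uses the common USEPA-style formulation for the oral ingestion route; all parameter values, including the slope factor, are placeholders rather than data from the study.

```python
# Illustrative sketch of a USEPA-style cancer risk calculation for one exposure
# route (oral ingestion). All numeric values below are generic placeholders for
# demonstration only, not measurements or factors from the study.

def chronic_daily_intake(c_water, ingestion_rate, exposure_freq, exposure_years,
                         body_weight, averaging_time_days):
    """CDI (mg/kg-day) = (Cw * IR * EF * ED) / (BW * AT)."""
    return (c_water * ingestion_rate * exposure_freq * exposure_years) / (
        body_weight * averaging_time_days)

def lifetime_cancer_risk(cdi, slope_factor):
    """LCR (dimensionless) = CDI * SF; route-specific risks are then summed."""
    return cdi * slope_factor

# Hypothetical example for one haloacetic acid via ingestion.
cdi_oral = chronic_daily_intake(
    c_water=0.020,                 # mg/L, placeholder concentration
    ingestion_rate=2.0,            # L/day
    exposure_freq=350,             # days/year
    exposure_years=30,             # years
    body_weight=70,                # kg
    averaging_time_days=70 * 365)  # lifetime averaging period for carcinogens
risk_oral = lifetime_cancer_risk(cdi_oral, slope_factor=0.05)  # (mg/kg-day)^-1, placeholder
print(f"CDI = {cdi_oral:.2e} mg/kg-day, LCR = {risk_oral:.2e}")
```

In the full assessment described above, analogous CDI terms for dermal and inhalation exposure would be computed and their risks summed to give the total LCR per compound.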
Procedia PDF Downloads 56
771 The Effects of Self-Reflections on Intercultural Communication Competency: A Case Study of the University of Arkansas-Fort Smith
Authors: JaeYoon Park
Abstract:
The ability to communicate effectively across different cultures is a necessary skill in today’s increasingly globalized world. Intercultural communication competency (ICC) is a way of being that benefits all members of a society in their living, learning, and working environments as well as in the context of mediated communications. This study examines the effects of self-reflection processes on the improvement of intercultural communication skills, focusing on college students at the University of Arkansas-Fort Smith. A total of sixty-nine students’ works were analyzed based on data collected over the past three years (2016, 2017 and 2018). The students in the ‘Culture and Communication’ class each spring completed the Diversity Awareness Profile (DAP) survey as a pre- and post-test for the course. The DAP is a self-assessment tool designed by Karen Stinson and widely used in college classes, companies, and organizations to evaluate an individual’s behaviors in various intercultural settings. It can assist individuals in becoming more aware of diversity issues and also provide a foundation for developing strategies for modifying any undesirable behavior they may discover in the assessment. In addition to the DAP surveys, the students also submitted self-reflection essays that discussed their own scores. The University of Arkansas-Fort Smith is a small regional university located in the Bible Belt of the United States; white, Christian, working-class students dominate its student population. The students whose data were collected were predominantly college seniors majoring in either Media Communication or International Business. Approximately 80% of the students increased their scores, and 42% of them moved forward into a new category. The findings also indicate that students in underrepresented groups (i.e., women, minority, and international students) show less change in their scores and behaviors than the rest of the students (i.e., white heterosexual male students). These findings result, for the most part, from the fact that the underrepresented students were already aware of diversity and intercultural issues through their personal experiences before taking the class. The white heterosexual male students demonstrated the greatest improvements, judging from their DAP scores (pre- and post-tests) and self-reflection essays. Through the class assignments and discussions, which emphasized critical thinking and self-reflection, the latter group of students not only became more aware of the meaning of their own words and behaviors but were also able to develop greater proficiency in intercultural communication. This e-poster presentation will analyze the findings of this research and also discuss the pedagogical implications of these results.
Keywords: cross-cultural communication, diversity awareness survey, self-reflection, underrepresented students
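As a small illustration of the pre-/post-test comparison described above, the sketch below computes the share of students whose DAP score increased and the share who moved into a new category. The scores and category cut-offs are invented; they are not the study's data or the DAP's actual scoring bands.

```python
# Minimal illustrative sketch of the pre/post DAP comparison, using made-up
# scores and made-up category cut-offs purely for demonstration.

def category(score, cutoffs=(20, 40, 60)):
    """Map a DAP score to a coarse category index (cut-offs are hypothetical)."""
    return sum(score >= c for c in cutoffs)

# Hypothetical (pre, post) score pairs for a handful of students.
scores = [(35, 44), (52, 51), (28, 41), (61, 66), (39, 38)]

increased = sum(post > pre for pre, post in scores)
moved_up = sum(category(post) > category(pre) for pre, post in scores)

print(f"{100 * increased / len(scores):.0f}% of students increased their score")
print(f"{100 * moved_up / len(scores):.0f}% moved into a new category")
```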
Procedia PDF Downloads 119
770 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e., its melt flow behavior, is determined as a function of the previously computed polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be remarkable, especially taking into consideration that the applied multi-scale modelling approach does not involve parameter fitting to the data. This validates the suggested approach and proves its universality at the same time. In a next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varying process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
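The stochastic step of the approach (building a branched microstructure by Monte Carlo sampling) can be illustrated, in heavily simplified form, by the sketch below: chains are grown monomer by monomer and short- and long-chain branch points are assigned with fixed per-step probabilities. The probabilities, chain lengths and ensemble size are arbitrary illustrative values, not the kinetic parameters of the actual model.

```python
# Heavily simplified Monte Carlo sketch of building a branched-chain ensemble.
# Per-monomer branching probabilities and mean chain length are arbitrary
# illustrative values, not the kinetic parameters used in the actual model.
import random

def grow_chain(rng, mean_length=1000, p_scb=5e-3, p_lcb=5e-4):
    """Return (chain_length, n_short_branches, n_long_branches) for one chain."""
    length = max(1, int(rng.expovariate(1.0 / mean_length)))  # roughly geometric lengths
    scb = sum(rng.random() < p_scb for _ in range(length))
    lcb = sum(rng.random() < p_lcb for _ in range(length))
    return length, scb, lcb

rng = random.Random(42)
ensemble = [grow_chain(rng) for _ in range(20000)]

total_monomers = sum(n for n, _, _ in ensemble)
scb_per_1000 = 1000 * sum(s for _, s, _ in ensemble) / total_monomers
lcb_per_1000 = 1000 * sum(l for _, _, l in ensemble) / total_monomers
mn = total_monomers / len(ensemble)  # number-average chain length

print(f"Mn ~ {mn:.0f} monomer units")
print(f"SCB ~ {scb_per_1000:.2f} per 1000 monomer units, LCB ~ {lcb_per_1000:.3f} per 1000 monomer units")
```

In the actual workflow, such an ensemble of explicitly branched chains (rather than just averages) is what feeds the branch-on-branch rheology calculation.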
Procedia PDF Downloads 123
769 Measurement of Fatty Acid Changes in Post-Mortem Belowground Carcass (Sus-scrofa) Decomposition: A Semi-Quantitative Methodology for Determining the Post-Mortem Interval
Authors: Nada R. Abuknesha, John P. Morgan, Andrew J. Searle
Abstract:
Information regarding the post-mortem interval (PMI) in criminal investigations is vital to establish a time frame when reconstructing events. The PMI is defined as the time period that has elapsed between the occurrence of death and the discovery of the corpse. Adipocere, commonly referred to as ‘grave-wax’, is formed when post-mortem adipose tissue is converted into a solid material that is largely composed of fatty acids. Adipocere is of interest to forensic anthropologists, as its formation is able to slow down the decomposition process. Therefore, analysing the changes in the patterns of fatty acids during the early decomposition process may make it possible to estimate the period of burial, and hence the PMI. The current study concerned the investigation of the fatty acid composition and patterns in buried pig fat tissue, in an attempt to determine whether particular patterns of fatty acid composition can be shown to be associated with the duration of burial, and hence may be used to estimate the PMI. Adipose tissue from the abdominal region of domestic pigs (Sus scrofa) was used to model the human decomposition process. A 17 x 20 cm piece of pork belly was buried in a shallow artificial grave, and weekly samples (n=3) of the buried pig fat tissue were collected over an 11-week period. The marker fatty acids palmitic (C16:0), oleic (C18:1n-9) and linoleic (C18:2n-6) acid were extracted from the buried pig fat tissue and analysed as fatty acid methyl esters using a gas chromatography system. Levels of the marker fatty acids were quantified from their respective standards. The concentrations of C16:0 (69.2 mg/mL) and C18:1n-9 (44.3 mg/mL) at time zero exhibited significant fluctuations during the burial period. Levels rose (116 and 60.2 mg/mL, respectively) and then fell from the second week onward to reach 19.3 and 18.3 mg/mL, respectively, at week 6. Levels showed another increase at week 9 (66.3 and 44.1 mg/mL, respectively) followed by a gradual decrease at week 10 (20.4 and 18.5 mg/mL, respectively). A sharp increase was observed in the final week (131.2 and 61.1 mg/mL, respectively). Conversely, the levels of C18:2n-6 remained more or less constant throughout the study. In addition to fluctuations in the concentrations, several new fatty acids appeared in the later weeks, while other fatty acids that were detectable in the time-zero sample were lost in the later weeks. There are several probable opportunities to utilise fatty acid analysis as a basic technique for approximating the PMI: the quantification of marker fatty acids and the detection of selected fatty acids that either disappear or appear during the burial period. This pilot study indicates that this may be a potential semi-quantitative methodology for determining the PMI. Ideally, the analysis of particular fatty acid patterns in the early stages of decomposition could be an additional tool to the techniques already available, improving the overall process of estimating the PMI of a corpse.
Keywords: adipocere, fatty acids, gas chromatography, post-mortem interval
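Quantification "from their respective standards" is typically done with an external calibration curve; the sketch below illustrates that step with invented peak areas and standard concentrations, not the chromatographic data of the study.

```python
# Illustrative sketch of quantifying a marker fatty acid from an external
# standard curve (detector peak area vs. concentration). All peak areas and
# standard concentrations below are invented numbers for demonstration only.
import numpy as np

# Hypothetical calibration standards for the C16:0 methyl ester.
std_conc = np.array([10.0, 25.0, 50.0, 100.0, 150.0])       # mg/mL
std_area = np.array([1.1e5, 2.7e5, 5.5e5, 1.08e6, 1.62e6])  # detector counts

# Linear calibration: area = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_area, 1)

def concentration_from_area(area):
    """Invert the calibration line to recover concentration in mg/mL."""
    return (area - intercept) / slope

sample_area = 7.4e5  # hypothetical peak area for one weekly sample
print(f"C16:0 ~ {concentration_from_area(sample_area):.1f} mg/mL")
```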
Procedia PDF Downloads 131
768 Inertial Spreading of Drop on Porous Surfaces
Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi
Abstract:
The microgravity on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion. These small scales make it impossible to observe the inertial and pinning processes in detail. Instead, aboard the ISS, astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and persisted long enough to observe the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced that consisted of nine kinds of porous capillary plates made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet vs time. These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres, and similar results were observed. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium
Procedia PDF Downloads 137
767 Technology Road Mapping in the Fourth Industrial Revolution: A Comprehensive Analysis and Strategic Framework
Authors: Abdul Rahman Hamdan
Abstract:
The Fourth Industrial Revolution (4IR) has brought unprecedented technological advancements that have disrupted many industries worldwide. To keep pace with the rapid technological advances and disruption brought forth by the 4IR, technology road mapping has emerged as one of the critical tools for organizations to leverage. Technology road mapping can guide companies to become more adaptable, anticipate future transformation and innovation, and avoid becoming redundant or irrelevant amid rapid technological change. This research paper provides a comprehensive analysis of technology road mapping within the context of the 4IR. The objective of the paper is to provide companies with practical insights and a strategic framework of technology road mapping with which to navigate the fast-changing nature of the 4IR. This study also contributes to the understanding and practice of technology road mapping in the 4IR and, at the same time, provides organizations with the necessary tools and critical insight to navigate the 4IR transformation by leveraging technology road mapping. Based on a literature review and case studies, the study analyses key principles, methodologies, and best practices in technology road mapping and integrates them with the unique characteristics and challenges of the 4IR. The paper gives the background of the Fourth Industrial Revolution, explores the disruptive potential of 4IR technologies, and establishes the critical need for technology road mapping, consisting of strategic planning and foresight, to remain competitive and relevant in the 4IR era. It also highlights the importance of technology road mapping as an organisation’s proactive approach to aligning its objectives and resources with its technology and product development in response to the fast-evolving 4IR technological landscape. The paper also covers the theoretical foundations of technology road mapping, examines various methodological approaches, and identifies participants in the process, such as external experts, collaborative platforms, and cross-functional teams, to ensure an integrated and robust technology roadmap for the organisation. Moreover, this study presents a comprehensive framework for technology road mapping in the 4IR, incorporating key elements and processes such as technology assessment, competitive intelligence, risk analysis, and resource allocation. It provides a framework for implementing technology road mapping from strategic planning, goal setting, and technology scanning to roadmap visualisation, implementation planning, monitoring, and evaluation. In addition, the study addresses the challenges and limitations related to technology road mapping in the 4IR, including gap analysis. In conclusion, the study proposes a set of practical recommendations for organizations that intend to leverage technology road mapping as a strategic tool in the 4IR to drive innovation and remain competitive in current and future ecosystems.
Keywords: technology management, technology road mapping, technology transfer, technology planning
Procedia PDF Downloads 67
766 The Role of Goal Orientation on the Structural-Psychological Empowerment Link in the Public Sector
Authors: Beatriz Garcia-Juan, Ana B. Escrig-Tena, Vicente Roca-Puig
Abstract:
The aim of this article is to conduct a theoretical and empirical study in order to examine how the goal orientation (GO) of public employees affects the relationship between the structural and psychological empowerment that they experience at their workplaces. In doing so, we follow structural empowerment (SE) and psychological empowerment (PE) conceptualizations and relate them to the public administration framework. Moreover, we review arguments from GO theories and previous related contributions. Empowerment has emerged as an important issue in the public sector organization setting in the wake of mainstream New Public Management (NPM), the new orientation in the public sector that aims to provide a better service for citizens. It is closely linked to the drive to improve organizational effectiveness through the wise use of human resources. Nevertheless, it is necessary to combine structural (managerial) and psychological (individual) approaches in an integrative study of empowerment. SE refers to a set of initiatives that aim to transfer power from managerial positions to the rest of the employees. PE is defined as a psychological state of competence, self-determination, impact, and meaning that an employee feels at work. Linking these two perspectives leads to a broader understanding of the empowerment process. Specifically in the public sector, empirical contributions on this relationship are therefore important, particularly as empowerment is a very useful tool with which to face the challenges of the new public context. There is also a need to examine the moderating variables involved in this relationship, as well as to extend research on work motivation in public management. We propose studying the effect of individual orientations, such as GO. The GO concept refers to the individual disposition toward developing or confirming one’s capacity in achievement situations. Employees’ GO may be a key factor at work and in workforce selection processes, since it explains differences in personal work interests and in receptiveness to and interpretations of professional development activities. SE practices could affect PE feelings in different ways depending on employees’ GO, since employees perceive and respond differently to such practices, which is likely to yield distinct PE results. The model is tested on a sample of 521 Spanish local authority employees. Hierarchical regression analysis was conducted to test the research hypotheses using SPSS 22 software. The results do not confirm a direct link between SE and PE, but show that learning goal orientation (LGO) has considerable moderating power in this relationship, and that its interaction with SE affects employees’ PE levels. Therefore, the combination of SE practices and employees’ high levels of LGO is an important factor for creating psychologically empowered staff in public organizations.
Keywords: goal orientation, moderating effect, psychological empowerment, structural empowerment
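The moderation test reported above (the SE x learning goal orientation interaction predicting PE) can be illustrated with a hierarchical regression that adds an interaction term. The original analysis was run in SPSS 22; the sketch below, using randomly generated data and assumed variable names, is only an approximate equivalent in Python.

```python
# Illustrative sketch of a moderated (hierarchical) regression: PE regressed on
# SE, LGO and their interaction. The data are randomly generated and the
# variable names are assumptions; the original analysis was run in SPSS 22.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 521
df = pd.DataFrame({
    "SE": rng.normal(size=n),
    "LGO": rng.normal(size=n),
})
# Simulate a pure moderation effect: SE matters mainly when LGO is high.
df["PE"] = (0.05 * df["SE"] + 0.2 * df["LGO"]
            + 0.3 * df["SE"] * df["LGO"] + rng.normal(scale=1.0, size=n))

step1 = smf.ols("PE ~ SE + LGO", data=df).fit()   # step 1: main effects only
step2 = smf.ols("PE ~ SE * LGO", data=df).fit()   # step 2: adds the SE:LGO interaction
print(step2.params[["SE", "LGO", "SE:LGO"]])
print(f"R2 change: {step2.rsquared - step1.rsquared:.3f}")
```

A significant SE:LGO coefficient (and a meaningful R-squared change between the two steps) is the pattern the abstract describes as "considerable moderating power".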
Procedia PDF Downloads 281
765 Correlation Between Cytokine Levels and Lung Injury in the Syrian Hamster (Mesocricetus Auratus) Covid-19 Model
Authors: Gleb Fomin, Kairat Tabynov, Nurkeldy Turebekov, Dinara Turegeldiyeva, Rinat Islamov
Abstract:
The level of major cytokines in the blood of patients with COVID-19 varies greatly depending on age, gender, duration and severity of infection, and comorbidity. Two clinically significant cytokines, IL-6 and TNF-α, increase in level in patients with severe COVID-19. However, in a hamster model of COVID-19, TNF-α levels are unchanged or reduced, while the expression of other cytokines reflects the cytokine profile found in patients’ plasma. The aim of our study was to evaluate the relationship between cytokine levels in the blood and lungs and lung damage in the Syrian hamster (Mesocricetus auratus) model infected with a SARS-CoV-2 strain. The study used outbred female and male Syrian hamsters (n=36, 4 groups) weighing 80-110 g and 5 months old (IACUC protocol #4, 09/22/2020). Animals were infected intranasally with the hCoV-19/Kazakhstan/KazNAU-NSCEDI-481/2020 strain and euthanized at 3 d.p.i. The levels of the cytokines IL-6, TNF-α, IFN-α, and IFN-γ were determined by hamster-specific ELISA (MyBioSource, USA). Lung samples were subjected to histological processing, and the presence of pathological changes in histological preparations was assessed on a 3-point scale. The work was carried out in an ABSL-3 laboratory. The data were analyzed in GraphPad Prism 6.00 (GraphPad Software, La Jolla, California, USA). The work was supported by an MES RK grant (AP09259865). In the blood, the level of TNF-α increased in males (p=0.0012) and IFN-γ increased in males and females (p=0.0001); on the contrary, IFN-α production decreased (p=0.0006). Only the TNF-α level increased in lung tissues (p=0.0011). Correlation analysis showed a negative relationship between the level of IL-6 in the blood and lung damage in males (r = -0.71, p=0.0001) and females (r = -0.57, p=0.025). On the contrary, in males the level of IL-6 in the lungs was positively correlated with the lung damage score (r = 0.80, p=0.01). The level of IFN-γ in the blood (r = -0.64, p=0.035) and lungs (r = -0.72, p=0.017) in males showed a negative correlation with lung damage. No links were found for TNF-α and IFN-α. The study showed a positive association between lung injury and tissue levels of IL-6 in male hamsters. It is known that in humans, high concentrations of IL-6 in the lungs are associated with suppression of cellular immunity and, as a result, with an increase in the severity of COVID-19. TNF-α and IFN-γ play a key role in the pathogenesis of COVID-19 in hamsters; however, the mechanisms of their activity require more detailed study. IFN-α plays a lesser role in direct lung injury in the Syrian hamster model. We have shown the significance of tissue IL-6 and IFN-γ as predictors of the severity of lung damage in COVID-19 in the Syrian hamster model. Changes in the level of cytokines in the blood may not always reflect pathological processes in the lungs in COVID-19.
Keywords: Syrian hamster, COVID-19, cytokines, biological model
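The correlation step was performed in GraphPad Prism; as an assumed equivalent, the sketch below computes a Pearson correlation between a lung cytokine level and a histological damage score on invented per-animal values.

```python
# Illustrative sketch of the correlation step: Pearson correlation between a
# lung cytokine level and a histological lung damage score. All numbers are
# invented; the original analysis was performed in GraphPad Prism 6.00.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-animal values for one group of male hamsters.
il6_lung = np.array([12.0, 18.5, 25.1, 30.2, 22.4, 15.3, 28.7, 19.9, 26.5])  # pg/mL
damage_score = np.array([1, 1, 2, 3, 2, 1, 3, 2, 2])                          # 3-point scale

r, p = pearsonr(il6_lung, damage_score)
print(f"r = {r:.2f}, p = {p:.3f}")
```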
Procedia PDF Downloads 90
764 The Features of the Synergistic Approach in Marketing Management to Regional Level
Authors: Evgeni Baratashvili, Anzor Abralava, Rusudan Kutateladze, Nino Pailodze, Irma Makharashvili, Larisa Takalandze
Abstract:
Synergy as a neological term is reflected in the modern sciences. It can be found in various fields of science, including the humanities and technical sciences, among them biology and medicine, philology, economics, etc. Synergy is the surplus obtained when groups consolidated around one common idea achieve, through the combined application of their tools, a total effect greater than the effect of their separate, independent actions. Under market-economy conditions, and in the terms of the new communication terminology, synergy acts successfully on management and marketing, as well as on the defense of the purity of the native language. The works of the well-known scientist and public figure Academician I. Prangishvili are especially valuable in this respect; in our opinion, research on entropy in our country is linked to his name. In the modern economy, the current qualitative changes show us that a large number of factors and issues have been regrouped; they have a great influence and even define economic development. The declining capacity of the traditional resources of economic growth is related to the exhaustion of their physical potential as they approach their limits, and also to their reduced effectiveness, which at the same time increases expenditures. This means that an innovative system of products and services must take the leading role in the economic growth model. In our opinion, the above-mentioned system is distinguished by the synergistic approach. It should be noted that the main components of the innovative system are technological, scientific and scientific-technical, social-organizational, managerial and cognitive changes. All of them are reflected, in the proper proportions, in scientific works and inventions, in know-how and in material sources; at every stage they create the reproduction cycle. Innovations differ from each other in technology, origination, design, novelty and quality, subject-content structure, the spread of economic processes and the impact of the level of their distribution. We have presented a generalized statement of an innovative approach, which is not a single act of innovation but a targeted system of development, implementation, reconciling exploitation, production, diffusion and commercialization of novelties. The innovative approach should be considered as the creation of novelties, an in-depth process of creativity, and an innovative alternative for realizing innovative and entrepreneurial efforts and measures in order to meet the requirements of this permanent process.
Keywords: economic development, leading process, neological term, synergy
Procedia PDF Downloads 199
763 Biofiltration Odour Removal at Wastewater Treatment Plant Using Natural Materials: Pilot Scale Studies
Authors: D. Lopes, I. I. R. Baptista, R. F. Vieira, J. Vaz, H. Varela, O. M. Freitas, V. F. Domingues, R. Jorge, C. Delerue-Matos, S. A. Figueiredo
Abstract:
Deodorization is nowadays a necessity in wastewater treatment plants. Nitrogen and sulphur compounds, volatile fatty acids, aldehydes and ketones are responsible for the unpleasant odours, with ammonia, hydrogen sulphide and mercaptans being the most common pollutants. Although chemical treatment of the extracted air is efficient, it is more expensive than biological treatment, namely due to the use of chemical reagents (commonly sulphuric acid, sodium hypochlorite and sodium hydroxide). Biofiltration offers the advantage of avoiding the use of reagents (only in some cases are nutrients added in order to increase the treatment efficiency) and can be considered a sustainable process when the packing medium used is of natural origin. In this work, the application of some locally available natural materials was studied both at laboratory and pilot scale, in a real wastewater treatment plant. The materials selected for this study were indigenous Portuguese forest materials derived from eucalyptus and pinewood, such as woodchips and bark; coconut fiber was also used for comparison purposes. Their physico-chemical characterization was performed: density, moisture, pH, buffer capacity and water retention capacity. Laboratory studies involved batch adsorption experiments for ammonia and hydrogen sulphide removal and evaluation of microbiological activity. Four pilot-scale biofilters (1 cubic meter volume) were installed at a local wastewater treatment plant, treating odours from the effluent receiving chamber. Each biofilter contained a different packing material consisting of mixtures of eucalyptus bark, pine woodchips and coconut fiber, with added buffering agents and nutrients. The odour treatment efficiency was monitored over time, as well as other operating parameters. The operation at pilot scale suggested that, among the processes involved in biofiltration - adsorption, absorption and biodegradation - the first dominates at the beginning, while the biofilm is developing. When the biofilm is completely established and the adsorption capacity of the material is reached, biodegradation becomes the most relevant odour removal mechanism. High odour and hydrogen sulphide removal efficiencies were achieved throughout the testing period (over 6 months), confirming the suitability of the selected materials, and of the mixtures prepared from them, for biofiltration applications.
Keywords: ammonia and hydrogen sulphide removal, biofiltration, natural materials, odour control in wastewater treatment plants
Procedia PDF Downloads 300
762 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk
Authors: Michael Mihalicz, Aziz Guergachi
Abstract:
Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the field and related disciplines matured, descriptive theories emerged that are capable of explaining systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preference of participants experiencing an elevated affective or visceral state to that of participants in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will determine visceral states and risk preferences randomly over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods. Significance: The expected results include a series of measurable and systematic effects on the subjective interpretation of gamble attributes, and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness of the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
Keywords: decision making, emotions, prospect theory, visceral factors
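The abstract refers to three distinct elements that define risk preference under Cumulative Prospect Theory without stating the functional forms. As an illustration, the sketch below uses the standard Tversky-Kahneman (1992) specification: value-function curvature, loss aversion and inverse-S probability weighting. The parameters are the widely cited 1992 median estimates, and for simplicity the same weighting function is applied to gains and losses; none of this is taken from the study itself.

```python
# Illustrative sketch of the standard Cumulative Prospect Theory building blocks
# (Tversky & Kahneman, 1992 functional forms). Parameter values are the widely
# cited 1992 median estimates, used purely to illustrate the three elements of
# risk preference, not values estimated in this study.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting (gain-domain form, reused here for both domains)."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def cpt_value_simple(outcome, p):
    """CPT value of a simple two-outcome prospect (outcome with probability p, else 0)."""
    return weight(p) * value(outcome)

# A small gamble: 10% chance to win 100, otherwise nothing, and its mirror-image loss.
print(f"CPT value of (+100, p=0.10): {cpt_value_simple(100, 0.10):.2f}")
print(f"CPT value of (-100, p=0.10): {cpt_value_simple(-100, 0.10):.2f}")
```

In the study's framing, visceral states would be tested as moderators of exactly these kinds of elements (curvature, loss aversion, probability weighting), estimated per participant from the lottery-style elicitation questions.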
Procedia PDF Downloads 148
761 Systematic Identification of Noncoding Cancer Driver Somatic Mutations
Authors: Zohar Manber, Ran Elkon
Abstract:
Accumulation of somatic mutations (SMs) in the genome is a major driving force of cancer development. Most SMs in the tumor's genome are functionally neutral; however, some cause damage to critical processes and provide the tumor with a selective growth advantage (termed cancer driver mutations). Current research on functional significance of SMs is mainly focused on finding alterations in protein coding sequences. However, the exome comprises only 3% of the human genome, and thus, SMs in the noncoding genome significantly outnumber those that map to protein-coding regions. Although our understanding of noncoding driver SMs is very rudimentary, it is likely that disruption of regulatory elements in the genome is an important, yet largely underexplored mechanism by which somatic mutations contribute to cancer development. The expression of most human genes is controlled by multiple enhancers, and therefore, it is conceivable that regulatory SMs are distributed across different enhancers of the same target gene. Yet, to date, most statistical searches for regulatory SMs have considered each regulatory element individually, which may reduce statistical power. The first challenge in considering the cumulative activity of all the enhancers of a gene as a single unit is to map enhancers to their target promoters. Such mapping defines for each gene its set of regulating enhancers (termed "set of regulatory elements" (SRE)). Considering multiple enhancers of each gene as one unit holds great promise for enhancing the identification of driver regulatory SMs. However, the success of this approach is greatly dependent on the availability of comprehensive and accurate enhancer-promoter (E-P) maps. To date, the discovery of driver regulatory SMs has been hindered by insufficient sample sizes and statistical analyses that often considered each regulatory element separately. In this study, we analyzed more than 2,500 whole-genome sequence (WGS) samples provided by The Cancer Genome Atlas (TCGA) and The International Cancer Genome Consortium (ICGC) in order to identify such driver regulatory SMs. Our analyses took into account the combinatorial aspect of gene regulation by considering all the enhancers that control the same target gene as one unit, based on E-P maps from three genomics resources. The identification of candidate driver noncoding SMs is based on their recurrence. We searched for SREs of genes that are "hotspots" for SMs (that is, they accumulate SMs at a significantly elevated rate). To test the statistical significance of recurrence of SMs within a gene's SRE, we used both global and local background mutation rates. Using this approach, we detected - in seven different cancer types - numerous "hotspots" for SMs. To support the functional significance of these recurrent noncoding SMs, we further examined their association with the expression level of their target gene (using gene expression data provided by the ICGC and TCGA for samples that were also analyzed by WGS).
Keywords: cancer genomics, enhancers, noncoding genome, regulatory elements
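The recurrence ("hotspot") test is described only at a high level. One common formulation, shown below as a hedged sketch, compares the observed somatic-mutation count in a gene's SRE to the expectation under a background mutation rate using a Poisson model; the rates, SRE length and counts are invented, and the study's actual statistics (global and local backgrounds) may differ.

```python
# Minimal illustrative sketch of a recurrence ("hotspot") test: compare the
# observed somatic mutation count in a gene's set of regulatory elements (SRE)
# to the expectation under a background mutation rate, using a Poisson model.
# The background rate, SRE length and counts are invented numbers; the study's
# actual statistical model (global and local backgrounds) may differ.
from scipy.stats import poisson

def sre_hotspot_pvalue(observed_sms, sre_length_bp, n_samples, bg_rate_per_bp):
    """One-sided P(X >= observed) for X ~ Poisson(expected mutations in the SRE)."""
    expected = sre_length_bp * n_samples * bg_rate_per_bp
    return poisson.sf(observed_sms - 1, expected), expected

# Hypothetical example: a 12 kb SRE, 2,500 WGS samples, background 2e-6 SMs/bp/sample.
p, expected = sre_hotspot_pvalue(observed_sms=95, sre_length_bp=12_000,
                                 n_samples=2_500, bg_rate_per_bp=2e-6)
print(f"expected = {expected:.1f} SMs, observed = 95, one-sided p = {p:.2e}")
```

In practice, such p-values would be computed for every gene's SRE and corrected for multiple testing before calling hotspots.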
Procedia PDF Downloads 102
760 The Effect of Information vs. Reasoning Gap Tasks on the Frequency of Conversational Strategies and Accuracy in Speaking among Iranian Intermediate EFL Learners
Authors: Hooriya Sadr Dadras, Shiva Seyed Erfani
Abstract:
Speaking skills merit meticulous attention from both learners and teachers. In particular, accuracy is a critical component in guaranteeing that the intended message is conveyed through conversation, because an erroneous form may adversely alter the content and purpose of the talk. Different types of tasks have helped teachers meet numerous educational objectives. Besides, negotiation of meaning and the use of different strategies have been areas of concern in socio-cultural theories of SLA. Negotiation of meaning is among the conversational processes that play a crucial role in facilitating the understanding and expression of meaning in a given second language. Conversational strategies are used during interaction when there is a breakdown in communication that leads the interlocutor to attempt to remedy the gap through talk. Therefore, this study was an attempt to investigate whether there was any significant difference between the effect of reasoning gap tasks and information gap tasks on the frequency of conversational strategies used in negotiation of meaning in classrooms, on the one hand, and on the speaking accuracy of Iranian intermediate EFL learners, on the other. After a pilot study to check the practicality of the treatments, at the outset of the main study the Preliminary English Test (PET) was administered to ensure the homogeneity of 87 out of 107 participants, who attended the intact classes of a 15-session term in one control and two experimental groups. The speaking sections of the PET were used as the pretest and posttest to examine speaking accuracy. The tests were recorded and transcribed to estimate the percentage of clauses with no grammatical errors out of the total clauses produced, as the measure of speaking accuracy. In all groups, the grammatical points of accuracy were instructed and the use of conversational strategies was practised. Then, different kinds of reasoning gap tasks (matchmaking, deciding on a course of action, and working out a timetable) and information gap tasks (restoring an incomplete chart, spotting the differences, arranging sentences into stories, and a guessing game) were employed in the experimental groups during the treatment sessions, and the students were required to practise conversational strategies while doing the speaking tasks. The conversations throughout the term were recorded and transcribed to count the frequency of the conversational strategies used in all groups. The results of the statistical analysis demonstrated that applying both the reasoning gap tasks and the information gap tasks significantly affected the frequency of conversational strategies used in negotiation. Beyond these improvements, the reasoning gap tasks had a greater impact on encouraging negotiation of meaning and on increasing the frequency of conversational strategies in each session. The findings also indicated that both task types helped learners significantly improve their speaking accuracy; here, the reasoning gap tasks were more effective than the information gap tasks in improving the learners' level of speaking accuracy.
Keywords: accuracy in speaking, conversational strategies, information gap tasks, reasoning gap tasks
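The accuracy measure described above (error-free clauses as a share of all clauses produced) can be expressed as a small helper; the clause counts in the example are hypothetical and do not come from the study's transcripts.

```python
def speaking_accuracy(error_free_clauses: int, total_clauses: int) -> float:
    """Percentage of clauses with no grammatical errors out of all clauses produced."""
    if total_clauses <= 0:
        raise ValueError("total_clauses must be positive")
    return 100.0 * error_free_clauses / total_clauses

# Hypothetical transcript: 34 error-free clauses out of 52 produced
print(f"{speaking_accuracy(34, 52):.1f}%")   # 65.4%
```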
Procedia PDF Downloads 308
759 The Development of Traffic Devices Using Natural Rubber in Thailand
Authors: Weeradej Cheewapattananuwong, Keeree Srivichian, Godchamon Somchai, Wasin Phusanong, Nontawat Yoddamnern
Abstract:
Natural rubber used for traffic devices in Thailand has been developed and researched for several years. Compared with Dry Rubber Content (DRC), the quality of Ribbed Smoked Sheet (RSS) is better; however, the cost of admixtures, especially CaCO₃ and sulphur, is higher than the cost of the RSS itself. In this research, flexible guideposts and Rubber Fender Barriers (RFB) are considered. For the flexible guideposts, both RSS and DRC60% are used, but for the RFB only RSS is used, owing to the controlled performance tests. The objective of the flexible guideposts and RFB is to decrease the number of accidents, fatality rates, and serious injuries. The function of both devices is to protect road users and vehicles by absorbing impact forces from vehicles and thereby reducing the severity of road accidents. This leads to mitigation measures that reduce motorists' injuries from severe to moderate. The solution is to find the best practice for traffic devices using natural rubber under engineering concepts. In addition, material properties such as tensile strength and durability are tested, and the modulus of elasticity and related properties are calculated. In the laboratory, crash simulation, finite element analysis of materials, LRFD, and concrete technology methods are taken into account. After calculation, trial compositions of the materials are mixed and tested in the laboratory. The tensile, compressive, and weathering (durability) tests are carried out in accordance with ASTM standards. Furthermore, a cycle-repetition test of the flexible guideposts will be conducted. The final step is to fabricate all materials and test a real section in the field. In the RFB programme there will be 13 crash tests: 7 pickup truck tests and 6 motorcycle tests. Such vehicular crash testing is being conducted for the first time in Thailand, applying trial-and-error methods; for example, the road crash test under the NCHRP TL-3 standard (100 kph) is changed to MASH 2016, because MASH 2016 is better than NCHRP in terms of vehicle speed, type, and weight, and the crash angle. In the MASH procedure, Test Level 6 (TL-6), which comprises a 2,270 kg pickup truck, 100 kph, and a 25-degree crash angle, is selected. The final full-scale crash test will be carried out, and the whole system will be evaluated again in Korea. The researchers hope that the number of road accidents will decrease and that Thailand will no longer be among the top ten countries for road accidents in the world.
Keywords: LRFD, Load and Resistance Factor Design, ASTM, American Society for Testing and Materials, NCHRP, National Cooperative Highway Research Program, MASH, Manual for Assessing Safety Hardware
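Since the abstract mentions calculating the modulus of elasticity from tensile tests, a minimal sketch of that calculation follows. The specimen dimensions and load-extension values are hypothetical placeholders, not the study's measured data, and the single-point calculation assumes the linear-elastic region of the test.

```python
def elastic_modulus(force_n, extension_m, area_m2, gauge_length_m):
    """Young's modulus E = stress / strain from one point in the linear-elastic
    region of a tensile test."""
    stress = force_n / area_m2                 # Pa
    strain = extension_m / gauge_length_m      # dimensionless
    return stress / strain                     # Pa

# Hypothetical rubber specimen: 50 N over a 100 mm^2 cross-section,
# 2 mm extension on a 25 mm gauge length
print(elastic_modulus(50.0, 0.002, 100e-6, 0.025) / 1e6, "MPa")
```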
Procedia PDF Downloads 127
758 An Examination of Economic Evaluation Approaches in Mental Health Promotion Initiatives Targeted at Black and Asian Minority Ethnic Communities in the UK: A Critical Discourse Analysis
Authors: Phillipa Denise Peart
Abstract:
Black, Asian and Minority Ethnic (BAME) people are more at risk of developing mental health disorders because they are more exposed to unfavorable social, economic, and environmental circumstances, including housing, education, employment, community development, stigma, and discrimination. However, the majority of BAME mental health intervention studies focus on treatment with therapeutically effective drugs and use basic economic methods to evaluate their effectiveness; as a result, little is invested in the economic assessment of psychosocial interventions in BAME mental health. The UK government's austerity programme and reduced funding for mental health services have increased the need for the evaluation and assessment of initiatives to focus on value for money. The No Health without Mental Health policy (2011) provides practice guidance to practitioners, but there is little or no mention of the need for mental health initiatives targeted at BAME communities to be effective in terms of their impact and cost-effectiveness. This appears to be at odds with the wider political discourse, which suggests there should be an increasing focus on health economic evaluation. As a consequence, it could be argued that while such policies direct organisations to provide mental health services to the BAME community, by not requiring effective governance, assurance, and evaluation processes they merely pay lip service to these problems and do not help advance knowledge and practice through evidence-based approaches. As a result, BAME communities suffer from a lack of efficient resources that could aid the recovery process. This research study explores the mental health initiatives targeted at BAME communities and analyses the techniques used when examining the cost-effectiveness of mental health initiatives for BAME communities. Using critical discourse analysis as the approach and method, mental health services will be selected as case studies and their evaluations examined, alongside the political drivers that frame, shape, and direct their work. In doing so, the study will analyse what the mental health policy initiatives are, how the initiatives are directed, how economic models of evaluation are used in mental health programmes, and how value-for-money impacts and outcomes are articulated by mental health programme staff. It is anticipated that this study will further our understanding of how to provide adequate mental health resources and will deliver creative, supportive research to ensure that evaluation is effective, enabling the government to provide and maintain high-quality and efficient mental health initiatives targeted at BAME communities.
Keywords: black, Asian and ethnic minority, economic models, mental health, health policy
Procedia PDF Downloads 110
757 The Advancement of Smart Cushion Product and System Design Enhancing Public Health and Well-Being at Workplace
Authors: Dosun Shin, Assegid Kidane, Pavan Turaga
Abstract:
According to the National Institutes of Health, living a sedentary lifestyle leads to a number of health issues, including an increased risk of cardiovascular disease, type 2 diabetes, obesity, and certain types of cancer. This project brings together experts in multiple disciplines, combining product design, sensor design, algorithms, and health intervention studies to develop a product and system that helps reduce the amount of time spent sitting at the workplace. This paper illustrates ongoing improvements to the prototypes the research team developed in the initial research, including working prototypes with a software application that were developed and demonstrated for users. Additional modifications were made to improve functionality, aesthetics, and ease of use, which will be discussed in this paper. Extending the foundations created in the initial phase, our approach sought to further improve the product by conducting additional human factors research, studying deficiencies in competitive products, testing various materials and forms, developing working prototypes, and obtaining feedback from additional potential users. The solution consists of an aesthetically pleasing seat-cover cushion that easily attaches to the common office chairs found in most workplaces, ensuring that a wide variety of people can use the product. The product discreetly contains sensors that track when the user sits on the chair, sending information to a phone app that triggers reminders for users to stand up and move around after sitting for a set amount of time. This paper also presents the analysis of typical office aesthetics and the selected materials, colors, and forms that complemented the working environment. Comfort and ease of use remained high priorities as the design team sought to provide a product and system that integrates into the workplace. As the research team continues to test, improve, and implement this solution for the sedentary workplace, it seeks to create a viable product that acts as an impetus for a more active workday and lifestyle, further decreasing the proliferation of chronic disease and health issues for sedentary working people. This paper illustrates in detail the processes of engineering, product design, methodology, and testing results.
Keywords: anti-sedentary work behavior, new product development, sensor design, health intervention studies
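A minimal sketch of the reminder logic described above (the sensor detects sitting, and the app prompts the user after a set threshold). The threshold, polling interval, and function names are assumptions for illustration and are not the project's actual firmware or app code.

```python
import time

SIT_THRESHOLD_S = 30 * 60      # prompt after 30 minutes of continuous sitting
POLL_INTERVAL_S = 5

def monitor(read_pressure_sensor, send_reminder):
    """Track continuous sitting time and trigger a stand-up reminder.
    Runs until interrupted; callbacks supply sensor input and notification output."""
    seated_since = None
    while True:
        if read_pressure_sensor():                      # True while the user is seated
            seated_since = seated_since or time.time()
            if time.time() - seated_since >= SIT_THRESHOLD_S:
                send_reminder("Time to stand up and move around!")
                seated_since = time.time()              # restart the timer after prompting
        else:
            seated_since = None                         # user stood up; reset
        time.sleep(POLL_INTERVAL_S)
```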
Procedia PDF Downloads 157
756 Depressive-Like Behavior in a Murine Model of Colorectal Cancer Associated with Altered Cytokine Levels in Stress-Related Brain Regions
Authors: D. O. Miranda, L. R. Azevedo, J. F. C. Cordeiro, A. H. Dos Santos, S. F. Lisboa, F. S. Guimarães, G. S. Bisson
Abstract:
Background: Colorectal cancer (CRC) is one of the most common cancers and the fourth leading cause of cancer death in the world. The prevalence of psychiatric disorders among CRC patients, mainly depression, is high, resulting in impaired quality of life and side effects of primary treatment. High levels of proinflammatory cytokines in the tumor microenvironment are a feature of CRC, and the literature suggests that those mediators could contribute to the development of psychiatric disorders. Nevertheless, the ability of tumor-associated biological processes to affect the central nervous system (CNS) has only recently been explored in the context of symptoms of depression and is still not well understood. Therefore, the aim of the present study was to test the hypothesis that depressive-like behavior in an experimental model of CRC induced by N-methyl-N'-nitro-N-nitrosoguanidine (MNNG) is correlated with a proinflammatory profile in the periphery and in the brain. Methods: Colorectal carcinogenesis was induced in adult C57BL/6 mice (n=12) by administration of MNNG (5 mg/kg, 0.1 ml/intrarectal instillation) twice a week for 2 weeks. The control group (n=12) received saline (0.1 ml/intrarectal instillation). Eight weeks after the beginning of MNNG administration, animals were submitted to the forced swim test (FST) and the sucrose preference test for evaluation of depressive- and anhedonia-like behaviors, respectively. After behavioral evaluation, the colon was collected and brain regions were dissected (cortex-C, striatum-ST, and hippocampus-HIP) for subsequent evaluation of cytokine levels (IL-1β, IL-10, IL-17, and CX3CL1) by ELISA. Results: MNNG induced depressive-like behavior, represented by increased immobility time in the FST (Student's t test, p < 0.05) and lower sucrose preference (Student's t test, p < 0.05). Moreover, there were increased levels of IL-1β, IL-17, and CX3CL1 in the colonic tissue (Student's t test, p < 0.05) and in the brain (IL-1β in the ST and HIP, Student's t test, p < 0.05; IL-17 and CX3CL1 in the C and HIP, p < 0.05). IL-10 levels, in contrast, were decreased in both the colon (p < 0.05) and the brain (C and HIP, p < 0.05). Conclusions: The results obtained in the present work support the notion that tumor growth induces neuroinflammation in stress-related brain regions and depressive-like behavior, which could be related to the high incidence of depression in colorectal carcinogenesis. This work has important clinical and research implications, given that cytokine levels may be a promising marker for developing depression in CRC patients. New therapeutic strategies to help alleviate mental suffering in cancer patients might result from a better understanding of the role of cytokines in the pathophysiology of depression in these subjects.
Keywords: cytokines, brain, depression, colorectal cancer
Procedia PDF Downloads 270
755 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was predetermined by formal rules. Knowledge was represented as a set of rules that allowed the AI system to determine results for specific problems: a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems have typically not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy jurisdictions such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purpose of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be more easily described by analogy to the evolution of natural organisms, and with increasing computational power the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world's most significant patent law regimes, such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, of the inventive step as such, and of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, the key focus of this paper is the problem of patenting inventions that are themselves developed through machine learning. Under the current legal situation in most patent law regimes, the inventor of a patent application must be a natural person or a group of persons. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to part of the inventive concept. However, when machine learning or the AI algorithm has contributed to part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches use identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
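To make the "genetic breeding" class of models concrete, here is a minimal genetic algorithm sketch (tournament selection, one-point crossover, bit-flip mutation) maximising a toy objective. It illustrates the general technique only and is not any patented or patent-pending system discussed in the paper; all names and parameters are illustrative.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mutation_rate=0.02):
    """Minimal genetic algorithm: tournament selection, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # pick the fittest of three random individuals
            return max(random.sample(pop, 3), key=fitness)
        next_pop = []
        while len(next_pop) < pop_size:
            parent_a, parent_b = tournament(), tournament()
            cut = random.randrange(1, genome_len)                 # one-point crossover
            child = parent_a[:cut] + parent_b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]  # bit-flip mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy objective ("OneMax"): maximise the number of 1-bits in the genome
best = evolve(fitness=sum)
print(best, sum(best))
```

The patent-law question raised above is precisely whether a solution "bred" by such a loop, rather than written by a person, has a natural-person inventor who developed part of the inventive concept.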
Procedia PDF Downloads 107
754 Smart BIM Documents - the Development of the Ontology-Based Tool for Employer Information Requirements (OntEIR), and its Transformation into SmartEIR
Authors: Shadan Dwairi
Abstract:
Defining proper requirements is one of the key factors for a successful construction project. Although many attempts have been put forward to assist in identifying requirements, this area is still underdeveloped. In Building Information Modelling (BIM) projects, the Employer Information Requirements (EIR) document is the fundamental requirements document and a necessary ingredient in achieving a successful BIM project. The provision of a full and clear EIR is essential to achieving BIM Level 2. As defined by PAS 1192-2, the EIR is a "pre-tender document that sets out the information to be delivered and the standards and processes to be adopted by the supplier as part of the project delivery process". The standard also notes that the "EIR should be incorporated into tender documentation to enable suppliers to produce an initial BIM Execution Plan (BEP)". The importance of an effective definition of the EIR lies in its contribution to better productivity during the construction process in terms of cost and time, in addition to improving the quality of the built asset. Proper and clear information is a key aspect of the EIR, both in terms of the information it contains and, more importantly, the information the client receives at the end of the project, which enables the effective management and operation of the asset, where typically about 60%-80% of the cost is spent. This paper reports on the research done in developing the Ontology-based tool for Employer Information Requirements (OntEIR). OntEIR has proven able to produce a full and complete set of EIRs, ensuring that the client's information needs for the final model delivered by BIM are clearly defined from the beginning of the process. The paper also reports on the ongoing work of transforming OntEIR into a smart tool for defining Employer Information Requirements (smartEIR). smartEIR extends OntEIR so that it can develop custom EIRs tailored to the project type, project requirements, and client capabilities. The initial idea behind smartEIR is to move away from the notion that "one EIR fits all". smartEIR utilises the links made in OntEIR and creates a 3D matrix that transforms it into a smart tool. The OntEIR tool is based on the OntEIR framework, which utilises both ontology and the decomposition of goals to elicit and extract the complete set of requirements needed for a full and comprehensive EIR. A new categorisation system for requirements is also introduced in the framework and tool, which facilitates understanding and enhances the clarification of the requirements, especially for novice clients. Findings from the evaluation of the tool, conducted with experts in the industry, showed that the OntEIR tool contributes towards the effective and efficient development of EIRs that provide a better understanding of the information requirements requested by BIM and support the production of a complete BIM Execution Plan (BEP) and a Master Information Delivery Plan (MIDP).
Keywords: building information modelling, employer information requirements, ontology, web-based, tool
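A minimal sketch of how the "3D matrix" tailoring idea could be represented in code: EIR clauses indexed by project type, requirement category, and client capability level, then assembled per project. The dimensions, categories, and clause texts are illustrative assumptions, not the actual OntEIR/smartEIR data model.

```python
# (project_type, requirement_category, client_capability) -> EIR clauses
EIR_MATRIX = {
    ("new-build", "information delivery", "novice"): [
        "Define COBie deliverables at each data drop",
        "Supplier to propose the level of information need per stage",
    ],
    ("refurbishment", "information delivery", "experienced"): [
        "Align deliverables with the client's existing asset data schema",
    ],
}

def tailor_eir(project_type, categories, client_capability):
    """Assemble a tailored set of EIR clauses for one project from the 3D matrix."""
    eir = []
    for category in categories:
        eir += EIR_MATRIX.get((project_type, category, client_capability), [])
    return eir

print(tailor_eir("new-build", ["information delivery"], "novice"))
```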
Procedia PDF Downloads 126
753 Investigation of the IL23R Psoriasis/PsA Susceptibility Locus
Authors: Shraddha Rane, Richard Warren, Stephen Eyre
Abstract:
IL-23 is a pro-inflammatory molecule that signals T cells to release cytokines such as IL-17A and IL-22. Psoriasis is driven by a dysregulated immune response, within which IL-23 is now thought to play a key role. Genome-wide association studies (GWAS) have identified a number of genetic risk loci that support the involvement of IL-23 signalling in psoriasis, in particular a robust susceptibility locus at a gene encoding a subunit of the IL-23 receptor (IL23R) (Stuart et al., 2015; Tsoi et al., 2012). The lead psoriasis-associated SNP rs9988642 is located approximately 500 bp downstream of IL23R but is in tight linkage disequilibrium (LD) with a missense SNP rs11209026 (R381Q) within IL23R (r² = 0.85). The minor (G) allele of rs11209026 is present in approximately 7% of the population and is protective for psoriasis and several other autoimmune diseases, including IBD, ankylosing spondylitis, RA, and asthma. The psoriasis-associated missense SNP R381Q causes an arginine-to-glutamine substitution in a region of the IL23R protein between the transmembrane domain and the putative JAK2 binding site in the cytoplasmic portion. This substitution is expected to affect the receptor's surface localisation or signalling ability rather than IL23R expression. Recent studies have also identified a psoriatic arthritis (PsA)-specific signal at IL23R, thought to be independent of the psoriasis association (Bowes et al., 2015; Budu-Aggrey et al., 2016). The lead PsA-associated SNP rs12044149 is intronic to IL23R and is in LD with likely causal SNPs intersecting promoter and enhancer marks in memory CD8+ T cells (Budu-Aggrey et al., 2016). It is therefore likely that the PsA-specific SNPs affect IL23R function via a different mechanism than the psoriasis-specific SNPs. It could be hypothesised that the PsA risk allele located within the IL23R promoter causes an increase in IL23R expression relative to the protective allele; increased expression of IL23R might then lead to an exaggerated immune response. The independent genetic signals identified for psoriasis and PsA in this locus indicate that different mechanisms underlie the two conditions, although both likely affect the function of IL23R. It is very important to further characterise these mechanisms in order to better understand how the IL-23 receptor and its downstream signalling are affected in both diseases. This will help to determine how psoriasis and PsA patients might respond differently to therapies, particularly IL-23 biologics. To investigate this further, we have developed an in vitro model using CD4+ T cells that express either wild-type IL23R and IL12Rβ1 or mutant IL23R (R381Q) and IL12Rβ1. A model expressing different isoforms of IL23R is also under development to investigate the effects on IL23R expression. We propose to further investigate the variants for psoriasis and PsA and to characterise key intracellular processes related to the variants.
Keywords: IL23R, psoriasis, psoriatic arthritis, SNP
Procedia PDF Downloads 167
752 Challenges & Barriers for Neuro Rehabilitation in Developing Countries
Authors: Muhammad Naveed Babur, Maria Liaqat
Abstract:
Background & Objective: People with disabilities, especially neurological disabilities, have many unmet health and rehabilitation needs, face barriers in accessing mainstream health-care services, and consequently have poor health. There are not sufficient epidemiological studies from Pakistan that assess barriers to neurorehabilitation and ways to counter them. Objectives: The objective of the study was to determine the challenges to, and evaluate the barriers for, neurorehabilitation services in developing countries. Methods: This is an exploratory sequential qualitative study based on the panel discussion forum at the International Rehabilitation Sciences Congress and the National Rehabilitation Conference 2017. A panel group discussion was conducted in February 2017 with a sample of eight professionals, including a rehabilitation medicine physician, physical therapist, speech-language therapist, occupational therapist, clinical psychologist, and rehabilitation nurse, all working in multidisciplinary/interdisciplinary teams. A comprehensive audio-video recording was made, transcribed, and documented. The data were transcribed, and a thematic analysis, along with characteristics, was drawn manually. Data verification was done with the help of two separate coders. Results: After extraction by the two separate coders, the following results emerged. The general category themes are disease profile, demographic profile, training and education, research, barriers, governance, global funding, informal care, resources, cultural beliefs, and public awareness. Barriers identified at the patient level are high cost, stigma, and a lengthy course of recovery. Hospital-related barriers are a lack of social support and of individually tailored goal-setting processes. Organizational barriers identified are a lack of basic diagnostic facilities and a lack of funding and human resources. Recommendations given by the panelists were investment in education, capacity building, infrastructure, governance support, strategies to promote communication, and realistic goals. Conclusion: It is concluded that neurorehabilitation in developing countries needs attention in the following categories: disease profile, demographic profile, training and education, research, barriers, governance, global funding, informal care, resources, cultural beliefs, and public awareness. This study also revealed barriers at the patient, hospital, and organizational levels. Recommendations were also given by the panelists.
Keywords: disability, neurorehabilitation, telerehabilitation
Procedia PDF Downloads 191
751 Phytomining for Rare Earth Elements: A Comparative Life Cycle Assessment
Authors: Mohsen Rabbani, Trista McLaughlin, Ehsan Vahidi
Abstract:
The remediation of sites polluted with heavy metals, such as rare earth elements (REEs), has been a primary concern of researchers seeking to decontaminate the soil. Among all the methods developed to address this concern, phytoremediation has been established as an efficient, cost-effective, easy-to-use, and environmentally friendly approach, providing a long-term solution to this global concern. Furthermore, this technology has another great potential application in the metals production sector, through recovering metals buried in soil via metal cropping. Considering the significant metal concentrations in hyperaccumulators, the utilization of bioaccumulated metals, i.e., extracting metals from plant matter, has been proposed as a sub-economic area called phytomining. As a recent, more advanced technology to eliminate such pollutants from the soil and produce critical metals, bioharvesting (phytomining/agromining) has been considered another promising way to produce metals and meet the global demand for critical/target metals. The bio-ore obtained from phytomining can be safely disposed of or introduced into metal production pathways to obtain the most demanded metals, such as REEs. It is well known that some hyperaccumulators, e.g., the fern Dicranopteris linearis, can be used to absorb REEs from polluted soils and accumulate them in plant organs such as leaves and stems. After soil remediation, the plant material can be harvested and introduced to the downstream steps, namely crushing/grinding, leaching, and purification, to extract REEs from the plant matter. This novel interdisciplinary field can fill the gap between agriculture, mining, metallurgy, and the environment. Despite the advantages of agromining for the REE production industry, key issues related to the environmental sustainability of the entire life cycle of this new concept have not yet been assessed. Hence, a comparative life cycle assessment (LCA) study was conducted to quantify the environmental footprint of REE phytomining. The current LCA study aims to estimate and calculate the environmental effects associated with phytomining by considering critical impact categories such as climate change, land use, and ozone depletion. The results revealed that phytomining is an easy-to-use and environmentally sustainable approach either to eliminate REEs from polluted sites or to produce REEs, offering a new source for the production of such metals. This LCA research provides guidelines for researchers working to develop a reliable relationship between agriculture, mining, metallurgy, and the environment in order to counter soil pollution and keep the earth green and clean.
Keywords: phytoremediation, phytomining, life cycle assessment, environmental impacts, rare earth elements, hyperaccumulator
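A minimal sketch of the core LCA calculation behind impact categories such as climate change and land use: inventory flows are multiplied by characterization factors and summed per category. The flows, factors, and category names below are placeholder values for illustration, not the study's inventory data or results.

```python
# Life cycle inventory per kg of REE produced via phytomining (hypothetical values)
inventory = {"CO2": 12.0, "CH4": 0.05, "land_occupation_m2a": 3.5}

# Characterization factors per impact category (illustrative values only)
factors = {
    "climate change (kg CO2-eq)": {"CO2": 1.0, "CH4": 28.0},
    "land use (m2a)": {"land_occupation_m2a": 1.0},
}

def characterize(inventory, factors):
    """Aggregate elementary flows into impact-category indicator scores."""
    return {
        category: sum(cf.get(flow, 0.0) * amount for flow, amount in inventory.items())
        for category, cf in factors.items()
    }

print(characterize(inventory, factors))
```

Comparing such category scores for phytomining against those of conventional REE production routes is what makes the assessment "comparative".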
Procedia PDF Downloads 68
750 Effects of Learner-Content Interaction Activities on the Context of Verbal Learning Outcomes in Interactive Courses
Authors: Alper Tolga Kumtepe, Erdem Erdogdu, M. Recep Okur, Eda Kaypak, Ozlem Kaya, Serap Ugur, Deniz Dincer, Hakan Yildirim
Abstract:
Interaction is one of the most important components of open and distance learning. According to Moore, who proposed one of the keystone frameworks on interaction types, there are three basic types of interaction: learner-teacher, learner-content, and learner-learner. Of these interaction types, learner-content interaction can without doubt be identified as the most fundamental one, on which all education is based. The efficacy, efficiency, and attractiveness of open and distance learning systems can be achieved through effective learner-content interaction. With the development of new technologies, interactive e-learning materials have commonly been used as a resource in open and distance learning, along with printed books. The intellectual engagement of learners with the content, that is, with the course materials, may also affect their satisfaction with open and distance learning practices in general. Learner satisfaction holds an important place in open and distance learning, since it eventually contributes to the achievement of learning outcomes. Using learner-content interaction activities in course materials, Anadolu University, through its Open Education system, tries to involve learners in deep and meaningful learning practices. In particular, during the e-learning material design and production processes, identifying appropriate learner-content interaction activities within the context of learning outcomes is of great importance. Considering the lack of studies adopting this approach, as well as its focus on the use of e-learning materials in the Open Education system, this research holds considerable value in the open and distance learning literature. In this respect, the present study aimed to investigate a) which learner-content interaction activities included in interactive courses are the most effective in learners' achievement of verbal information learning outcomes and b) to what extent distance learners are satisfied with these learner-content interaction activities. For this study, a quasi-experimental research design was adopted. The 120 participants of the study were Anadolu University Open Education Faculty students living in Eskişehir. The students were divided into 6 groups randomly. While 5 of these groups received different learner-content interaction activities as part of the experiment, the remaining group served as the control group. The data were collected mainly through two instruments: a pre-test and a post-test. In addition to those tests, learners' perceived learning was assessed with a single item at the end of the program. The data collected from the pre-test and post-test were analyzed by ANOVA, and in light of the findings of this approximately 24-month study, suggestions for the further design of e-learning materials within the context of learner-content interaction activities will be provided at the conference. The current study is planned to be an antecedent for subsequent studies that will examine the effects of the activities on other learning domains.
Keywords: interaction, distance education, interactivity, online courses
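A minimal sketch of the kind of one-way ANOVA described above, comparing post-test scores across groups. The scores are hypothetical and only three of the six groups are shown for brevity; the study's actual data and follow-up comparisons may differ.

```python
from scipy.stats import f_oneway

# Hypothetical post-test scores for the control group and two treatment groups
control = [4, 6, 5, 7, 5, 6]
group_a = [8, 9, 7, 10, 8, 9]
group_b = [7, 6, 8, 9, 7, 8]

f_stat, p_value = f_oneway(control, group_a, group_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p suggests group means differ
```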
Procedia PDF Downloads 193