Search results for: developed stream power equation
864 Efficient Synthesis of Highly Functionalized Biologically Important Spirocarbocyclic Oxindoles via Hauser Annulation
Authors: Kanduru Lokesh, Venkitasamy Kesavan
Abstract:
The unique structural features of spiro-oxindoles, together with their diverse biological activities, have made them privileged structures in new drug discovery. The key structural characteristic of these compounds is the spiro ring fused at the C-3 position of the oxindole core with varied heterocyclic motifs. Structural diversification of heterocyclic scaffolds to synthesize new chemical entities as pharmaceuticals and agrochemicals is one of the important goals of synthetic organic chemists. Nitrogen- and oxygen-containing heterocycles are by far the most widely occurring privileged structures in medicinal chemistry. The structural complexity and distinct three-dimensional arrangement of functional groups in these privileged structures are generally responsible for their specificity against biological targets. Structurally diverse compound libraries have proved to be valuable assets for drug discovery against challenging biological targets. Thus, identifying new combinations of substituents at the C-3 position of the oxindole moiety is of great importance in drug discovery to improve the efficiency and efficacy of drugs. The development of suitable methodology for the synthesis of spiro-oxindole compounds has attracted much interest, often in response to the significant biological activity displayed by both natural and synthetic compounds. Creating structural diversity in oxindole scaffolds is therefore both a pressing need and a formidable challenge. A general way to improve synthetic efficiency, and also to access diversified molecules, is through annulation reactions. Annulation reactions allow the formation of complex compounds from simple substrates in a single transformation comprising several steps, in an ecologically and economically favorable way. These observations motivated us to develop an annulation protocol enabling the synthesis of a new class of spiro-oxindole motifs, which in turn would enhance molecular diversity.
As part of our enduring interest in the development of novel, efficient synthetic strategies for biologically important oxindole-fused spirocarbocyclic systems, we have developed an efficient methodology for the construction of highly functionalized spirocarbocyclic oxindoles through [4+2] Hauser annulation of phthalides. To the best of our knowledge, this is the first report in the literature of a Hauser annulation strategy being used to access functionalized spirocarbocyclic oxindoles. The reaction between methyleneindolinones and arylsulfonylphthalides, catalyzed by cesium carbonate, gave access to a new class of biologically important spiro[indoline-3,2'-naphthalene] derivatives in very good yields. The synthetic utility of the annulated product was further demonstrated by fluorination using NFSI as the fluorinating agent to furnish the corresponding fluorinated product.
Keywords: Hauser-Kraus annulation, spirocarbocyclic oxindoles, oxindole-ester, fluorination
Procedia PDF Downloads 198
863 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor by considering both quantitative and qualitative image evaluation parameters in clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV were measured from clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced a noise level approximately equal to that of OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at 2 min/bp, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at 5 min/bp. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV.
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, since noise was reduced more than SUV at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm with a higher number of iterations leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be exploited to shorten acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
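The relative difference penalty that β scales can be illustrated with a minimal sketch. This is a hedged, hypothetical implementation of a commonly used form of the penalty over a 1D image (the image values, β, and γ below are invented for illustration; a real reconstruction applies this inside the iterative update over 3D neighbourhoods):

```python
def relative_difference_penalty(image, beta=400.0, gamma=2.0):
    """Sum a commonly used form of the relative difference penalty
    over neighbouring pixel pairs of a 1D image."""
    penalty = 0.0
    for j in range(len(image) - 1):
        fj, fk = image[j], image[j + 1]
        diff = fj - fk
        denom = fj + fk + gamma * abs(diff)
        if denom > 0:  # guard against all-zero neighbourhoods
            penalty += diff ** 2 / denom
    return beta * penalty

# A perfectly uniform image incurs no penalty; noise increases it,
# and a larger beta penalizes the same noise more strongly.
print(relative_difference_penalty([5.0, 5.0, 5.0]))      # 0.0
print(relative_difference_penalty([4.0, 6.0, 5.0]) > 0)  # True
```

The γ term makes the penalty relative: the same absolute difference is penalized less where pixel values (and hence expected noise) are larger, which is why raising β suppresses noise more than it suppresses SUV.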
Procedia PDF Downloads 97
862 Detection of Some Drugs of Abuse from Fingerprints Using Liquid Chromatography-Mass Spectrometry
Authors: Ragaa T. Darwish, Maha A. Demellawy, Haidy M. Megahed, Doreen N. Younan, Wael S. Kholeif
Abstract:
Reliable testing for drugs of abuse is essential in order to confirm their misuse. Several analytical approaches have been developed for the detection of drugs of abuse in pharmaceutical and common biological samples, but few methodologies have been created to identify them from fingerprints. Liquid Chromatography-Mass Spectrometry (LC-MS) plays a major role in this field. The current study aimed to assess the possibility of detecting some drugs of abuse (tramadol, clonazepam, and phenobarbital) from the fingerprints of drug abusers using LC-MS. The aim was extended to assess the possibility of detecting the above-mentioned drugs in the fingerprints of drug handlers up to three days after handling the drugs. The study was conducted on randomly selected adult individuals who were either drug abusers seeking treatment at centers of drug dependence in Alexandria, Egypt, or normal volunteers who were asked to handle the studied drugs (drug handlers). Informed consent was obtained from all individuals. Participants were classified into three groups: a control group of 50 normal individuals (neither abusing nor handling drugs); a drug abuser group of 30 individuals who abused tramadol, clonazepam or phenobarbital (10 individuals for each drug); and a drug handler group of 50 individuals who touched either the powder of the drugs of abuse (tramadol, clonazepam or phenobarbital; 10 individuals for each drug) or the powder of control substances of similar appearance (white powders) that might be used in the adulteration of drugs of abuse (acetylsalicylic acid and acetaminophen; 10 individuals for each drug). Samples were taken from the handler individuals on three consecutive days for the same individual. The diagnosis of drug abusers was based on the current Diagnostic and Statistical Manual of Mental Disorders (DSM-V) and urine screening tests using an immunoassay technique.
Preliminary drug screening tests of urine samples were also done for the drug handler and control groups to indicate the presence or absence of the studied drugs of abuse. Fingerprints of all participants were then taken on filter paper previously soaked with methanol, to be analyzed by LC-MS using a SCIEX Triple Quad or QTRAP 5500 System. The concentration of drugs in each sample was calculated using the regression equation between concentration in ng/ml and peak area for each reference standard. All fingerprint samples from drug abusers showed positive LC-MS results for the tested drugs, while all samples from the control individuals showed negative results. A significant relationship was noted between drug concentration and duration of abuse. Tramadol, clonazepam, and phenobarbital were also successfully detected from the fingerprints of drug handlers up to three days after handling the drugs. The mean concentration of the chosen drugs of abuse among the handler group decreased over the consecutive sampling days.
Keywords: drugs of abuse, fingerprints, liquid chromatography-mass spectrometry, tramadol
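The calibration step described above (a regression line relating reference-standard concentration to peak area, inverted to estimate an unknown's concentration) can be sketched as follows. The concentrations and peak areas below are invented for illustration and are not the study's calibration data:

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical reference standards: concentration (ng/ml) vs. peak area.
conc = [10.0, 25.0, 50.0, 100.0]
area = [1500.0, 3750.0, 7500.0, 15000.0]  # perfectly linear, for illustration
slope, intercept = fit_line(conc, area)

def conc_from_area(peak_area):
    """Invert the calibration line to estimate concentration (ng/ml)."""
    return (peak_area - intercept) / slope

print(round(conc_from_area(6000.0), 1))  # 40.0
```

In practice one calibration line is fitted per drug per reference standard, and the linear range and limit of detection are validated before quantifying unknowns.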
Procedia PDF Downloads 121
861 Connecting the Dots: Bridging Academia and National Community Partnerships When Delivering Healthy Relationships Programming
Authors: Nicole Vlasman, Karamjeet Dhillon
Abstract:
Over the past four years, the Healthy Relationships Program has been delivered in community organizations and schools across Canada. More than 240 groups have been facilitated in collaboration with 33 organizations. As a result, 2157 youth have been engaged in the programming. The purpose and scope of the Healthy Relationships Program are to offer sustainable, evidence-based skills through small group implementation to prevent violence and promote positive, healthy relationships in youth. The program development has included extensive networking at regional and national levels. The Healthy Relationships Program is currently being implemented, adapted, and researched within the Resilience and Inclusion through Strengthening and Enhancing Relationships (RISE-R) project. Alongside the project’s research objectives, the RISE-R team has worked to virtually share the ongoing findings of the project through a slow ontology approach. Slow ontology is a practice integrated into project systems and structures whereby slowing the pace and volume of outputs offers creative opportunities. Creative production reveals different layers of success and complements the project, the building blocks for sustainability. As a result of integrating a slow ontology approach, the RISE-R team has developed a Geographic Information System (GIS) that documents local landscapes through a Story Map feature, and more specifically, video installations. Video installations capture the cartography of space and place within the context of singular diverse community spaces (case studies). By documenting spaces via human connections, the project captures narratives, which further enhance the voices and faces of the community within the larger project scope. This GIS project aims to create a visual and interactive flow of information that complements the project's mixed-method research approach. 
Conclusively, creative project development in the form of a geographic information system can provide learning and engagement opportunities at many levels (i.e., within community organizations and educational spaces or with the general public). In each of these disconnected spaces, fragmented stories are connected through a visual display of project outputs. A slow ontology practice within the context of the RISE-R project documents activities on the fringes and within internal structures, primarily by documenting project successes as further contributions to the Centre for School Mental Health framework (philosophy, recruitment techniques, allocation of resources and time, and a shared commitment to evidence-based products).
Keywords: community programming, geographic information system, project development, project management, qualitative, slow ontology
Procedia PDF Downloads 155
860 Assessing the Impact of Frailty in Elderly Patients Undergoing Emergency Laparotomies in Singapore
Authors: Zhao Jiashen, Serene Goh, Jerry Goo, Anthony Li, Lim Woan Wui, Paul Drakeford, Chen Qing Yan
Abstract:
Introduction: Emergency laparotomy (EL) is one of the most common surgeries performed in Singapore to treat acute abdominal pathologies. A significant proportion of these surgeries are performed in the geriatric population (65 years and older), who tend to have the highest postoperative morbidity and mortality and the highest utilization of intensive care resources. Frailty, a state of vulnerability to adverse outcomes arising from an accumulation of physiological deficits, has been shown to be associated with poorer outcomes after surgery and remains a strong driver of healthcare utilization and costs. To date, there is little understanding of its impact on emergency laparotomy outcomes. The objective of this study is to examine the impact of frailty on postoperative morbidity, mortality, and length of stay after EL. Methods: A retrospective study was conducted in two tertiary centres in Singapore, Tan Tock Seng Hospital and Khoo Teck Puat Hospital, covering the period from January to December 2019. Patients aged 65 years and above who underwent emergency laparotomy for intestinal obstruction, perforated viscus, bowel ischaemia, adhesiolysis, gastrointestinal bleed, or another suspected acute abdomen were included. Laparotomies performed for trauma, cholecystectomy, appendectomy, vascular surgery, and non-GI surgery were excluded. The Clinical Frailty Scale (CFS) developed by the Canadian Study of Health and Aging (CSHA) was used. A score of 1 to 4 was defined as non-frail and 5 to 7 as frail. We compared the clinical outcomes of elderly patients in the frail and non-frail groups. Results: There were 233 elderly patients who underwent EL during the study period. Overall, 26.2% of patients were frail. Patients who were frail (CFS 5-7) tended to be older, 79 ± 7 vs 79 ± 5 years of age, p < 0.01. Gender distribution was equal in both groups.
Indications for emergency laparotomy, time from diagnosis to surgery, and the presence of consultant surgeons and anaesthetists in the operating theatre were comparable (p>0.05). Patients in the frail group were more likely to receive postoperative geriatric assessment than those in the non-frail group, 49.2% vs. 27.9% (p<0.01). Postoperative complications were comparable (p>0.05). The length of stay in the critical care unit was longer for frail patients, 2 (IQR 1-6.5) versus 1 (IQR 0-4) days, p<0.01. Frailty, but not age, was found to be an independent predictor of 90-day mortality, OR 2.9 (1.1-7.4), p=0.03. Conclusion: Up to one-fourth of the elderly patients who underwent EL were frail. Frail patients had a longer length of stay in the critical care unit and a 90-day mortality rate more than three times that of their non-frail counterparts. P-POSSUM was a better predictor of 90-day mortality in the non-frail group than in the frail group. As frailty scoring was a significant predictor of 90-day mortality, its integration into acute surgical units to facilitate shared decision-making and discharge planning should be considered.
Keywords: frailty, elderly, emergency laparotomy
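An odds ratio like the OR 2.9 reported above is, in its unadjusted form, derived from a 2x2 table of frailty status against 90-day mortality. The sketch below is hedged: the counts are invented merely to be roughly consistent with an OR near 2.9 and are not the study's data, and the study's age-adjusted estimate would come from logistic regression rather than this raw calculation:

```python
def odds_ratio(exposed_events, exposed_nonevents,
               unexposed_events, unexposed_nonevents):
    """Unadjusted odds ratio from a 2x2 table."""
    return ((exposed_events / exposed_nonevents)
            / (unexposed_events / unexposed_nonevents))

# Hypothetical counts: frail (12 died, 50 survived),
# non-frail (13 died, 157 survived).
print(round(odds_ratio(12, 50, 13, 157), 1))  # 2.9
```

The confidence interval quoted in the abstract (1.1-7.4) excludes 1, which is what makes frailty a statistically significant predictor at p=0.03.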
Procedia PDF Downloads 145
859 Social and Educational AI for Diversity: Research on Democratic Values to Develop Artificial Intelligence Tools to Guarantee Access for all to Educational Tools and Public Services
Authors: Roberto Feltrero, Sara Osuna-Acedo
Abstract:
Responsible Research and Innovation has to accomplish one fundamental aim: everybody has to share in the benefits of innovation, but innovation also has to be democratic; that is to say, everybody should have the possibility to participate in the decisions of the innovation process. In particular, a democratic and inclusive model of social participation and innovation includes persons with disabilities and people at risk of discrimination. Innovations in Artificial Intelligence for social development have to accomplish the same dual goal: improving equality of access to fields of public interest like education, training and public services, as well as improving civic and democratic participation in the process of developing such innovations for all. This research aims to develop innovations, policies and policy recommendations to apply and disseminate such an artificial intelligence and social model for making educational and administrative processes more accessible. The first step is to design a citizen participation process to engage citizens in the design and use of artificial intelligence tools for public services. This will improve trust in democratic institutions, contributing to the transparency, effectiveness, accountability and legitimacy of public policy-making and allowing people to participate in the development of ethical standards for the use of such technologies. The second step is to improve educational tools for lifelong learning with AI models to improve accountability and educational data management. Dissemination, education and social participation will be integrated, measured and evaluated in innovative educational processes to make accessible all the educational technologies and content developed on AI about responsible and social innovation. A particular case will be presented regarding access for all to educational tools and public services.
This accessibility requires cognitive adaptability because legal or administrative language is often very complex, not only for people with cognitive disabilities but also for older people or citizens at risk of educational or social discrimination. Artificial intelligence natural language processing technologies can provide tools to translate legal, administrative, or educational texts into simpler language that is accessible to everybody. Despite technological advances in language processing and machine learning, this becomes a huge project if we really want to respect the ethical and legal consequences, because such consequences can only be addressed with civil and democratic engagement in two realms: 1) democratically selecting the texts that need to be, and can be, translated; and 2) involving citizens, both experts and non-experts, to produce and validate real examples of legal texts with cognitive adaptations, to feed artificial intelligence algorithms that learn how to translate those texts into simpler, more accessible language adapted to any kind of population.
Keywords: responsible research and innovation, AI social innovations, cognitive accessibility, public participation
Procedia PDF Downloads 90
858 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures, such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., a WSA Section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology integrates wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to the core of the TT approach. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g.
bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
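For context on the "simplified fetch approach" the framework improves upon, the classical Sverdrup-Munk-Bretschneider (SMB) deep-water, fetch-limited relation estimates significant wave height from wind speed and fetch alone, ignoring lake geometry, random waves, and bed interaction. A hedged sketch of the commonly cited SMB form (the wind speed and fetch inputs are invented, and coefficients vary between handbook editions):

```python
import math

def smb_wave_height(wind_speed_ms, fetch_m, g=9.81):
    """Significant wave height (m) from the SMB deep-water,
    fetch-limited relation."""
    dimensionless_fetch = g * fetch_m / wind_speed_ms ** 2
    return (0.283 * wind_speed_ms ** 2 / g
            * math.tanh(0.0125 * dimensionless_fetch ** 0.42))

# Wave height grows with fetch for a fixed wind speed,
# saturating toward the fully developed limit.
for fetch in (1_000.0, 5_000.0, 50_000.0):
    print(round(smb_wave_height(15.0, fetch), 2))
```

A full-lake spectral model replaces this single-number fetch with the real shoreline and bathymetry, which is where the spatial distribution of wave parameters comes from.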
Procedia PDF Downloads 84
857 Characterization of the MOSkin Dosimeter for Accumulated Dose Assessment in Computed Tomography
Authors: Lenon M. Pereira, Helen J. Khoury, Marcos E. A. Andrade, Dean L. Cutajar, Vinicius S. M. Barros, Anatoly B. Rozenfeld
Abstract:
With the increase of beam widths and the advent of multiple-slice and helical scanners, concerns related to the current dose measurement protocols and instrumentation in computed tomography (CT) have arisen. The current methodology of dose evaluation, which is based on measuring the integral of a single-slice dose profile using a 100 mm long cylindrical ionization chamber (Ca,100 and CPMMA,100), has been shown to be inadequate for wide beams, as it does not collect enough of the scatter tails to make an accurate measurement. In addition, a long ionization chamber does not offer a good representation of the dose profile when tube current modulation is used. An alternative approach has been suggested: translating smaller detectors through the beam plane and assessing the accumulated dose through the integral of the dose profile, which can be done for any arbitrary length in phantoms or in air. For this purpose, a MOSFET dosimeter of small dosimetric volume was used. One of its recently designed versions is known as the MOSkin, developed by the Centre for Medical Radiation Physics at the University of Wollongong; it measures the radiation dose at a water-equivalent depth of 0.07 mm, allowing the evaluation of skin dose when placed at the surface, or internal point doses when placed within a phantom. Thus, the aim of this research was to characterize the response of the MOSkin dosimeter for X-ray CT beams and to evaluate its application for accumulated dose assessment. Initially, tests using an industrial X-ray unit were carried out at the Laboratory of Ionizing Radiation Metrology (LMRI) of the Federal University of Pernambuco, in order to investigate the sensitivity, energy dependence, angular dependence, and reproducibility of the dose response of the device for the standard radiation qualities RQT 8, RQT 9 and RQT 10.
Finally, the MOSkin was used for the accumulated dose evaluation of scans using a Philips Brilliance 6 CT unit, with comparisons made against the CPMMA,100 value assessed with a pencil ionization chamber (PTW Freiburg TW 30009). Both dosimeters were placed in the center of a PMMA head phantom (diameter of 16 cm) and exposed in axial mode with a collimation of 9 mm, 250 mAs and 120 kV. The results showed that the MOSkin response was linear with dose in the CT range and reproducible (98.52%). The sensitivity for a single MOSkin, in mV/cGy, was 9.208, 7.691 and 6.723 for the RQT 8, RQT 9 and RQT 10 beam qualities, respectively. The energy dependence varied by up to a factor of ±1.19 among those energies, and the angular dependence was not greater than 7.78% within the angle range from 0 to 90 degrees. The accumulated dose and the CPMMA,100 value were 3.97 and 3.79 cGy, respectively, which were statistically equivalent at the 95% confidence level. The MOSkin was shown to be a good alternative for CT dose profile measurements and more than adequate to provide accumulated dose assessments for CT procedures.
Keywords: computed tomography dosimetry, MOSFET, MOSkin, semiconductor dosimetry
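The sensitivity figures above define how a MOSFET reading converts to dose: the threshold-voltage shift in mV divided by the beam-quality-specific sensitivity in mV/cGy. A minimal sketch using the reported sensitivities (the voltage readings themselves are invented for illustration):

```python
# Sensitivities reported above, in mV per cGy, per beam quality.
SENSITIVITY_MV_PER_CGY = {"RQT 8": 9.208, "RQT 9": 7.691, "RQT 10": 6.723}

def dose_cgy(delta_v_mv, beam_quality):
    """Convert a MOSFET threshold-voltage shift (mV) to dose (cGy)."""
    return delta_v_mv / SENSITIVITY_MV_PER_CGY[beam_quality]

print(round(dose_cgy(9.208, "RQT 8"), 3))   # 1.0
print(round(dose_cgy(15.382, "RQT 9"), 3))  # 2.0
```

The accumulated dose reported for the CT scans is then the integral of such point-dose readings as the detector is translated through the beam plane.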
Procedia PDF Downloads 311
856 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation
Authors: Helen J. Spray, Karina J. Linnell
Abstract:
Local-global bias in adulthood is strongly dependent on environmental factors, and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific: urbanisation promoted a global bias in urbanised Namibian women but not men, whereas education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions that shape perceptual bias, and to date there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from three populations: traditional Namibian, urban Namibian, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from as early as age 4 in the British sample, and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias that was indistinguishable from that of British children; however, a global bias did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender-by-age interaction that was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias.
Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with the aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood, and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world, it will be important to understand the processes that underpin its effects and their implications.
Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization
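The similarity-matching task with Navon figures is typically scored as the proportion of trials on which a child matched by global shape rather than local elements. A minimal, hypothetical scoring sketch (the trial labels and responses below are invented, not the study's data):

```python
def global_bias(choices):
    """Proportion of trials matched by global shape
    (0 = fully local, 1 = fully global).
    `choices` holds 'global' or 'local' per trial."""
    return sum(c == "global" for c in choices) / len(choices)

# Hypothetical response sequence for one child:
trials = ["global", "global", "local", "global", "local", "global"]
print(round(global_bias(trials), 2))  # 0.67
```

A per-child score like this is what the between-group and within-group analyses described above would compare across age, population, and gender.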
Procedia PDF Downloads 139
855 Nudging the Criminal Justice System into Listening to Crime Victims in Plea Agreements
Authors: Dana Pugach, Michal Tamir
Abstract:
Most criminal cases end with a plea agreement, an issue whose many aspects have been discussed extensively in the legal literature. One important feature, however, has gained little notice, and that is crime victims' place in plea agreements following the federal Crime Victims' Rights Act of 2004. This law has provided victims some meaningful and potentially revolutionary rights, including the right to be heard in the proceeding and the right to appeal against a decision made while ignoring the victim's rights. While the victims' rights literature has always emphasized the importance of such rights, references to this provision in the general literature on plea agreements are sparse, if they exist at all. Furthermore, only a few cases mention this right. This article aims to bridge these two bodies of legal thinking – the vast literature concerning plea agreements and victims' rights research – by using behavioral economics. The article will, firstly, trace the possible structural reasons for the failure of this right to materialize. The relevant incentives of all actors involved will be identified, as well as the inherent consequential processes that lead to the malfunction of victims' rights. Secondly, the article will use nudge theory to suggest solutions that enhance incentives for the repeat players in the system (prosecution, judges, defense attorneys) and strengthen the interests of the weaker group – the crime victims. The behavioral psychology literature recognizes that the framework in which an individual confronts a decision can significantly influence his or her decision. Richard Thaler and Cass Sunstein developed the idea of 'choice architecture' – 'the context in which people make decisions' – which can be manipulated to make particular decisions more likely. Choice architectures can be changed by adjusting 'nudges', influential factors that help shape human behavior without negating free choice.
Nudges require decision makers to make choices instead of relying on a familiar default option. In accordance with this theory, we suggest a rule whereby a judge should inquire into the victim's view prior to accepting the plea. This suggestion leaves the judge's discretion intact, while at the same time nudging her not to go directly to the default decision, i.e., automatically accepting the plea. Creating nudges that force actors to make choices is particularly significant when an actor intends to deviate from routine behaviors but experiences significant time constraints, as in the case of judges and plea bargains. The article finally recognizes some far-reaching possible results of the suggestion. These include meaningful changes to the earlier stages of the criminal process, even before reaching court, in line with the current criticism of the plea agreement machinery.
Keywords: plea agreements, victims' rights, nudge theory, criminal justice
Procedia PDF Downloads 322
854 The Use of Social Stories and Digital Technology as Interventions for Autistic Children; A State-of-the-Art Review and Qualitative Data Analysis
Authors: S. Hussain, C. Grieco, M. Brosnan
Abstract:
Background and Aims: Autism is a complex neurobehavioural disorder, characterised by impairments in the development of language and communication skills. The study involved a state-of-the-art systematic review, in addition to qualitative data analysis, to establish the evidence for social stories as an intervention strategy for autistic children. An up-to-date review of the use of digital technologies in the delivery of interventions to autistic children was also carried out, to assess the efficacy of digital technologies and of social stories in improving intervention outcomes for autistic children. Methods: Two student researchers reviewed a range of randomised controlled trials and observational studies. The aim of the review was to establish whether there was adequate evidence to justify recommending social stories to autistic patients. The students devised their own search strategies for use across a range of databases, including Ovid-Medline, Google Scholar, and PubMed, and then critically appraised the retrieved literature. Additionally, qualitative data obtained from a comprehensive online questionnaire on social stories were thematically analysed. The thematic analysis was carried out independently by each researcher using a ‘bottom-up’ approach: each contributor read the responses to a given question, devised semantic themes from them, and placed each response into a semantic theme or sub-theme. The students then met to discuss merging their theme headings. Inter-rater reliability (IRR) was calculated before and after the theme headings were merged, giving IRR pre- and post-discussion. Lastly, the thematic analysis was assessed by a third researcher, a professor of psychology and director of the Centre for Applied Autism Research at the University of Bath. 
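The inter-rater reliability described above can be computed in several ways; as an illustration only, the sketch below uses Cohen's kappa for two raters assigning one theme per response. The theme labels are hypothetical, and the abstract does not state which IRR statistic the students actually used:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning one categorical theme per response."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of responses coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of each rater's marginal proportions, summed over themes
    expected = sum(freq_a[t] * freq_b[t] for t in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical theme codings for ten questionnaire responses
a = ["routine", "social", "routine", "anxiety", "social",
     "routine", "anxiety", "social", "routine", "social"]
b = ["routine", "social", "anxiety", "anxiety", "social",
     "routine", "anxiety", "routine", "routine", "social"]
print(round(cohens_kappa(a, b), 3))
```

Kappa corrects raw percent agreement for agreement expected by chance, which is why it is often preferred for pre- versus post-discussion IRR comparisons.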
Results: The review of the literature, as well as the thematic analysis of the qualitative data, found supporting evidence for social story use. The thematic analysis uncovered some interesting themes in the questionnaire responses relating to the reasons why social stories were used and the factors influencing their effectiveness in each case. Overall, however, the evidence for digital technology interventions was limited, and the literature could not establish a causal link between the use of technologies and better intervention outcomes for autistic children, although it did offer plausible theories for the suitability of digital technologies for autistic children. Conclusions: Overall, the review concluded that there was adequate evidence to justify advising the use of social stories with autistic children. The role of digital technologies is clearly a fast-emerging field and appears to be a promising method of intervention for autistic children; however, it should not yet be considered an evidence-based approach. Using this research, the students developed ideas for social story interventions that aim to help autistic children.
Keywords: autistic children, digital technologies, intervention, social stories
Procedia PDF Downloads 121
853 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed a huge campaign, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of university curricula. These protests created high expectations for universities to teach a curriculum relevant to the country and the continent while enabling South Africa to participate in the globalised world. To realise this purpose, most universities are currently taking steps to transform and decolonise their curricula. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts held by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of eight faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understanding and use of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning of curriculum content and of the constitutive rules and norms that control thinking. This is done not by ignoring other knowledge traditions, but through an affirmation and validation of African views of the world and systems of thought, mixed with current knowledge. 
For the Faculty of Natural and Agricultural Sciences, decolonisation is seen as making the content of the curriculum relevant to students, fulfilling the needs of industry, and equipping students for job opportunities. This means using teaching strategies and methods that are inclusive of students from diverse cultures and structuring the learning experience in ways that are not alien to the students' cultures. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards greater sensitivity to all cultural beliefs and thoughts. Collectively, decolonisation of education entails that a nation becomes independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curricula and integrate the concepts of decolonisation and Africanisation, there is a need to determine the meaning of the concepts contextually, first in general and then narrowed down to what they should mean for specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed from the finest knowledge, skills, values, beliefs, and habits around the world, not limited to one country or continent.
Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 162
852 Carbonyl Iron Particles Modified with Pyrrole-Based Polymer and Electric and Magnetic Performance of Their Composites
Authors: Miroslav Mrlik, Marketa Ilcikova, Martin Cvek, Josef Osicka, Michal Sedlacik, Vladimir Pavlinek, Jaroslav Mosnacek
Abstract:
Magnetorheological elastomers (MREs) are a unique type of material consisting of two components: a magnetic filler and an elastomeric matrix. Their properties can be tailored by applying an external magnetic field. The change in viscoelastic properties (viscoelastic moduli, complex viscosity) is influenced by two crucial factors. The first is the magnetic performance of the particles, and the second is the off-state stiffness of the elastomeric matrix. The former factor depends strongly on the intended application; however, the general rule is that higher magnetic performance of the particles provides higher MR performance of the MRE. Since magnetic particles have limited stability against elevated temperature and acidic environments, several methods of addressing these drawbacks have been developed. In most cases, the preparation of core-shell structures has been employed as a suitable method for protecting the magnetic particles against thermal and chemical oxidation. However, if the shell material is not a single-layer substance but a polymer, the magnetic performance is significantly suppressed: with the in situ polymerization technique it is very difficult to control the polymerization rate, and the polymer shell becomes too thick. The second factor is the off-state stiffness of the elastomeric matrix. Since the MR effect is calculated as the elastic modulus upon magnetic field application relative to the elastic modulus in the absence of the external field, tuneability of the cross-linking reaction is also highly desirable. Therefore, this study focuses on the controllable modification of magnetic particles using a novel monomer system based on 2-(1H-pyrrol-1-yl)ethyl methacrylate. Short polymer chains of different lengths and low polydispersity index will be prepared, and thus tailorable stability properties can be achieved. 
Since relatively thin polymer chains will be grafted onto the surface of the magnetic particles, their magnetic performance will be affected only slightly. Furthermore, the cross-linking density will also be affected, due to the presence of the short polymer chains. From the application point of view, such MREs can be utilized for magneto-resistors, piezoresistors, or pressure sensors, especially when a conducting shell is created on the magnetic particles. The selection of the pyrrole-based monomer is therefore crucial, allowing a controllably thin layer of conducting polymer to be prepared. Finally, such composite particles, consisting of a magnetic core and a conducting shell dispersed in an elastomeric matrix, can also find use in the shielding of electromagnetic waves.
Keywords: atom transfer radical polymerization, core-shell, particle modification, electromagnetic waves shielding
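The relative MR effect defined above (field-on elastic modulus measured against the off-state modulus) can be illustrated with a minimal sketch; the storage-modulus values below are hypothetical, chosen only to show why a softer off-state matrix yields a larger relative effect:

```python
def mr_effect(g_on_pa, g_off_pa):
    """Relative MR effect: field-induced increase in storage modulus,
    normalized by the off-state (zero-field) modulus."""
    return (g_on_pa - g_off_pa) / g_off_pa

# Hypothetical storage moduli (Pa): the same absolute field-induced
# increase gives a much larger relative effect in a softer matrix.
stiff = mr_effect(g_on_pa=150e3, g_off_pa=100e3)  # +50 kPa on a 100 kPa matrix
soft = mr_effect(g_on_pa=90e3, g_off_pa=40e3)     # +50 kPa on a 40 kPa matrix
print(stiff, soft)
```

This is why tuneability of the cross-linking reaction, which sets the off-state stiffness, matters as much as the filler's magnetic performance.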
Procedia PDF Downloads 209
851 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, decreasing highway-network functionality; failure to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in their restoration objectives, restoration durations, budgets, etc. Distinguishing these two phases is important for precisely quantifying highway network resilience and generating suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. 
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality helps to generate a more specific and reasonable LBRS. The theoretical and practical contributions are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Second, this study can improve highway network resilience along the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
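The downtime argument can be sketched numerically. Assuming resilience is quantified as the normalized area under a piecewise-constant functionality recovery curve (a common formulation; the paper's exact recovery-curve model is not reproduced here), hypothetical curves with and without repair downtime show how ignoring downtime inflates the resilience estimate:

```python
def resilience(functionality):
    """Resilience as average functionality over the recovery horizon:
    the normalized area under a piecewise-constant recovery curve
    (one value per time step, each in [0, 1])."""
    return sum(functionality) / len(functionality)

# Hypothetical 10-step recovery curves (fraction of pre-disaster
# network functionality). Ignoring downtime, functionality only rises.
no_downtime = [0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0, 1.0]
# Same repair schedule, but each bridge repair temporarily blocks its
# highway, delaying the functionality gains by one step.
with_downtime = [0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0, 1.0]

r_ideal = resilience(no_downtime)   # optimistic estimate
r_real = resilience(with_downtime)  # downtime-aware estimate
print(r_ideal, r_real, (r_ideal - r_real) / r_real)
```

The gap between the two areas is precisely the overestimation the study measures (about 15% for the Wenchuan case network); the numbers above are illustrative only.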
Procedia PDF Downloads 150
850 Approaching a Tat-Rev Independent HIV-1 Clone towards a Model for Research
Authors: Walter Vera-Ortega, Idoia Busnadiego, Sam J. Wilson
Abstract:
Introduction: Human Immunodeficiency Virus type 1 (HIV-1) is responsible for acquired immunodeficiency syndrome (AIDS), a leading cause of death worldwide, infecting millions of people each year. Despite intensive research in vaccine development, therapies against HIV-1 infection are not curative, and the huge genetic variability of HIV-1 challenges drug development. Current animal models for HIV-1 research present important limitations that impair the progress of in vivo approaches. Macaques require CD8+ depletion to progress to AIDS, and their maintenance cost is high. Mice are a cheaper alternative but need to be 'humanized,' and breeding is not possible. The development of an HIV-1 clone able to replicate in mice is a challenging proposal. The lack of human co-factors in mice impedes the function of the HIV-1 accessory proteins Tat and Rev, hampering HIV-1 replication. However, Tat and Rev function can be replaced by constitutive/chimeric promoters, codon-optimized proteins, and the constitutive transport element (CTE), generating a novel HIV-1 clone able to replicate in mice without disrupting the amino acid sequence of the virus. By minimally manipulating the genomic 'identity' of the virus, we propose the generation of an HIV-1 clone able to replicate in mice to assist in antiviral drug development. Methods: i) Plasmid construction: the chimeric promoters and CTE copies were cloned by PCR using lentiviral vectors as templates (pCGSW and pSIV-MPCG); Tat mutants were generated from replication-competent HIV-1 plasmids (NHG and NL4-3). ii) Infectivity assays: retroviral vectors were generated by transfection of human 293T cells and murine NIH 3T3 cells; virus titre was determined by flow cytometry measuring GFP expression; human B-cells (AA-2) and HeLa cells (TZM-bl) were used for infectivity assays. iii) Protein analysis: Tat protein expression was determined by TZM-bl assay and HIV-1 capsid by western blot. 
Results: We determined that NIH 3T3 cells are able to generate HIV-1 particles; however, these particles are not infectious, and further analysis needs to be performed. Codon-optimized HIV-1 constructs are efficiently produced in 293T cells in a Tat- and Rev-independent manner and are capable of packaging a competent genome in trans. CSGW is capable of generating infectious particles in the absence of Tat and Rev in human cells when four copies of the CTE are placed preceding the 3’LTR. HIV-1 Tat mutant clones encoding different promoters are functional during the first cycle of replication when Tat is added in trans. Conclusion: Our findings suggest that the development of an HIV-1 Tat-Rev independent clone is a challenging but achievable aim. However, further investigation is needed before presenting our HIV-1 clone as a candidate model for research.
Keywords: codon-optimized, constitutive transport element, HIV-1, long terminal repeats, research model
Procedia PDF Downloads 308
849 Modification of Unsaturated Fatty Acids Derived from Tall Oil Using Micro/Mesoporous Materials Based on H-ZSM-22 Zeolite
Authors: Xinyu Wei, Mingming Peng, Kenji Kamiya, Eika Qian
Abstract:
Iso-stearic acid, a saturated fatty acid with a branched chain, shows a low pour point, high oxidative stability, and good biodegradability. The industrial production of iso-stearic acid involves first isomerizing unsaturated fatty acids into branched-chain unsaturated fatty acids (BUFAs), followed by hydrogenating the BUFAs to obtain iso-stearic acid. However, the production yield of iso-stearic acid is reportedly less than 30%. In recent decades, extensive research has been conducted on branched fatty acids, with most work replacing acidic clays with zeolites owing to their high selectivity, good thermal stability, and regenerability. It has been reported that isomerization of unsaturated fatty acids occurs mainly inside the zeolite channels, whereas the production of by-products such as dimer acids occurs mainly at acid sites on the outer surface of the zeolite, and the deactivation of catalysts is attributed to pore blockage. In the present study, micro/mesoporous ZSM-22 zeolites were developed; the synthesis of a micro/mesoporous ZSM-22 zeolite is regarded as an ideal strategy owing to its ability to minimize coke formation. Micro/mesoporous H-ZSM-22 zeolites with different mesoporosities were prepared through recrystallization of ZSM-22 in sodium hydroxide solution (0.2-1 M) with a cetyltrimethylammonium bromide (CTAB) template. The structure, morphology, porosity, acidity, and isomerization performance of the prepared catalysts were characterized and evaluated. The dissolution and recrystallization of the H-ZSM-22 microporous zeolite led to the formation of approximately 4 nm mesoporous channels on the outer surface of the microporous zeolite, resulting in a micro/mesoporous material. This process increased the number of weak Brønsted acid sites at the pore mouths while reducing the total number of acid sites in ZSM-22. 
Finally, an activity test was conducted using oleic acid as a model compound in a fixed-bed reactor. The results revealed that the micro/mesoporous H-ZSM-22 zeolites exhibited high isomerization activity, reaching >70% selectivity and >50% yield of BUFAs, while the yield of oligomers was limited to less than 20%. This demonstrates that the presence of mesopores in ZSM-22 enhances contact between the feedstock and the active sites within the catalyst, thereby increasing catalyst activity. Additionally, a portion of the dissolved and recrystallized silica adhered to the catalyst's surface, covering surface-active sites and thus reducing the formation of oligomers. This study offers distinct insights into the production of iso-stearic acid using a fixed-bed reactor, paving the way for future research in this area.
Keywords: iso-stearic acid, oleic acid, skeletal isomerization, micro/mesoporous, ZSM-22
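The reported selectivity and yield figures are linked by the standard conversion-selectivity-yield relations. The sketch below uses hypothetical mole balances, chosen to be consistent with the reported ranges (>70% selectivity, >50% BUFA yield); the actual experimental values are not given beyond those bounds:

```python
def conversion(feed_in_mol, feed_out_mol):
    """Fraction of the feed (oleic acid) consumed."""
    return (feed_in_mol - feed_out_mol) / feed_in_mol

def selectivity(product_mol, feed_in_mol, feed_out_mol):
    """Fraction of consumed feed that became the desired product (BUFAs)."""
    return product_mol / (feed_in_mol - feed_out_mol)

def product_yield(product_mol, feed_in_mol):
    """Fraction of the original feed converted to the desired product."""
    return product_mol / feed_in_mol

# Hypothetical mole balance: 100 mol oleic acid in, 25 mol unreacted,
# 54 mol BUFAs formed (remainder: oligomers and other by-products).
x = conversion(100.0, 25.0)
s = selectivity(54.0, 100.0, 25.0)
y = product_yield(54.0, 100.0)
print(x, s, y)
```

The identity yield = conversion x selectivity makes clear that a 72% selectivity at 75% conversion gives the 54% BUFA yield above.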
Procedia PDF Downloads 23
848 Understanding Face-to-Face Household Gardens’ Profitability and Local Economic Opportunity Pathways
Authors: Annika Freudenberger, Sin Sokhong
Abstract:
In just a few years, the Face-to-Face Victory Gardens Project (F2F) in Cambodia has developed a high-impact project that provides immediate and tangible benefits to local families. This has been accomplished with a relatively hands-off approach that relies on households’ own motivation and personal investments of time and resources, which is both unique and impressive in the landscape of NGO and government initiatives in the area. Households have been growing food both for their own consumption and to sell or exchange. Not all targeted beneficiaries are equally motivated and maximizing their involvement, but there is a clear subset of households, particularly those who serve as facilitators, whose circumstances have been transformed as a result of F2F. A number of household factors and contextual economic factors affect families’ income generation opportunities. All the households we spoke with became involved with F2F with the goal of selling some proportion of their produce (i.e., not growing exclusively for their own consumption). For some, this income is marginal and supplemental to their core household income; for others, it is substantial and transformative. Some engage directly with customers and buyers in their immediate community, others sell in larger nearby markets, and others link up with intermediary vendors. All struggle, to a certain extent, to compete in a local economy flooded with cheap produce imported from large-scale growers in neighboring provinces, Thailand, and Vietnam, although households who grow and sell herbs and greens popular in Khmer cuisine have found a stronger local market. Some are content with the scale of their garden, the income they make, and the current level of effort required to maintain it; others would like to expand but face land constraints and water management challenges. 
Households making a substantial income from selling their products have achieved success in different ways, making it difficult to pinpoint a clear “model” for replication. Within our small sample of interviewees, it seems that the families with a clear passion for their gardens and high motivation to work hard to bring their products to market have succeeded in doing so. Khmer greens and herbs have been the most successful; they are not high-value crops, but they are fairly easy to grow, and there is constant demand. These crops are also not imported as much, so prices are more stable than those of crops such as long beans. Although we talked to a limited number of individuals, it also appears that successful families either restricted their crops to those that grow well in drought or flood conditions (depending on which affects them most), or already benefit from water management infrastructure, such as water tanks, which helps them diversify their crops and build their resilience.
Keywords: food security, Victory Gardens, nutrition, Cambodia
Procedia PDF Downloads 56
847 Development of the Food Market of the Republic of Kazakhstan in the Field of Milk Processing
Authors: Gulmira Zhakupova, Tamara Tultabayeva, Aknur Muldasheva, Assem Sagandyk
Abstract:
The development of technologies for producing products with increased biological value from natural food raw materials is an important task in the food market policy of the Republic of Kazakhstan. For Kazakhstan, livestock farming, in particular sheep farming, is an ancient and well-developed industry and way of life. The history of the Kazakh people is closely connected with this type of agricultural production, with established traditions of using dairy products made from sheep's milk. Therefore, the development of new technologies for sheep's milk remains relevant. In addition, sheep milk products are one of the most promising areas for the development of food technology for therapeutic and prophylactic purposes, as a source of protein, immunoglobulins, minerals, vitamins, and other biologically active compounds. This article presents the results of research on milk processing technology. The objective of the study is to examine the possibilities of processing sheep milk and its role in human nutrition, together with the results of research to improve the technology of sheep milk products. The studies were carried out on the basis of sanitary and hygienic requirements for dairy products, in accordance with the following test methods. For microbiological analysis, we used the method for identifying Salmonella bacteria (the horizontal method for detecting, counting, and serotyping Salmonella) in a given mass or volume of product. Nutritional value is the complex of properties of food products that meets human physiological needs for energy and basic nutrients. The protein mass fraction was determined by the Kjeldahl method, which is based on the mineralization of a milk sample with concentrated sulfuric acid in the presence of an oxidizing agent, an inert salt (potassium sulfate), and a catalyst (copper sulfate). In this process, the amino groups of the protein are converted into ammonium sulfate dissolved in the sulfuric acid. 
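The Kjeldahl determination described above measures nitrogen, which is then converted to crude protein by a nitrogen-to-protein factor. A minimal sketch, assuming the conventional dairy factor of 6.38 and a hypothetical nitrogen result (the abstract does not report the measured values):

```python
def kjeldahl_protein_pct(nitrogen_pct, factor=6.38):
    """Crude protein estimate from Kjeldahl nitrogen.
    6.38 is the conventional nitrogen-to-protein conversion factor for
    milk proteins; 6.25 is the general-purpose food factor."""
    return nitrogen_pct * factor

# Hypothetical Kjeldahl result for a sheep milk sample: 0.85% nitrogen
print(round(kjeldahl_protein_pct(0.85), 2))
```

The choice of factor matters: applying the generic 6.25 instead of the dairy-specific 6.38 understates milk protein by about 2%.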
The vitamin composition was determined by HPLC, and the mineral content of the studied samples was determined by atomic absorption spectrophotometry. The study identified the technological parameters of sheep milk products and determined the prospects for further research on them. Microbiological analysis was used to assess the safety of the studied product; no deviations from the norm were identified, indicating high safety of the products under study. In terms of nutritional value, the resulting products are high in protein, and favourable amino acid contents were also recorded. The results obtained will be used in the food industry and will serve as recommendations for manufacturers.
Keywords: dairy, milk processing, nutrition, colostrum
Procedia PDF Downloads 57
846 Machine Learning for Disease Prediction Using Symptoms and X-Ray Images
Authors: Ravija Gunawardana, Banuka Athuraliya
Abstract:
Machine learning has emerged as a powerful tool for disease diagnosis and prediction. The use of machine learning algorithms has the potential to improve the accuracy of disease prediction, thereby enabling medical professionals to provide more effective and personalized treatments. This study focuses on developing a machine-learning model for disease prediction using symptoms and X-ray images. The importance of this study lies in its potential to assist medical professionals in accurately diagnosing diseases, thereby improving patient outcomes. Respiratory diseases are a significant cause of morbidity and mortality worldwide, and chest X-rays are commonly used in the diagnosis of these diseases. However, accurately interpreting X-ray images requires significant expertise and can be time-consuming, making it difficult to diagnose respiratory diseases in a timely manner. By incorporating machine learning algorithms, we can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The study utilized the Mask R-CNN algorithm, a state-of-the-art method for object detection and segmentation in images, to process the chest X-ray images. The model was trained and tested on a large dataset of patient information, which included both symptom data and X-ray images. The performance of the model was evaluated using a range of metrics, including accuracy, precision, recall, and F1-score. The results showed that the model achieved an accuracy rate of over 90%, indicating that it was able to accurately detect and segment regions of interest in the X-ray images. In addition to X-ray images, the study also incorporated symptoms as input data for disease prediction, using three different classifiers, namely Random Forest, K-Nearest Neighbor, and Support Vector Machine, to predict diseases based on symptoms. These classifiers were trained and tested using the same dataset of patient information as the X-ray model. 
The results showed promising accuracy rates for predicting diseases from symptoms, with ensemble learning techniques significantly improving prediction accuracy. The study's findings indicate that the use of machine learning algorithms can significantly enhance disease prediction accuracy, ultimately leading to better patient care. The model developed in this study has the potential to assist medical professionals in diagnosing respiratory diseases more accurately and efficiently. However, it is important to note that the accuracy of the model can be affected by several factors, including the quality of the X-ray images, the size of the dataset used for training, and the complexity of the disease being diagnosed. In conclusion, the study demonstrated the potential of machine learning algorithms for disease prediction using symptoms and X-ray images. The use of these algorithms can improve the accuracy of disease diagnosis, ultimately leading to better patient care. Further research is needed to validate the model's accuracy and effectiveness in a clinical setting and to expand its application to other diseases.
Keywords: K-nearest neighbor, mask R-CNN, random forest, support vector machine
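The evaluation metrics and ensemble voting discussed above can be sketched in pure Python. The per-sample predictions below are hypothetical stand-ins for the Random Forest, K-Nearest Neighbor, and Support Vector Machine outputs; they illustrate hard (majority) voting and the accuracy/precision/recall/F1 definitions, not the study's actual data:

```python
from collections import Counter

def majority_vote(*prediction_lists):
    """Hard-voting ensemble: per sample, take the most common class label."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*prediction_lists)]

def binary_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall, and F1 for a binary label set."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Hypothetical predictions from three symptom classifiers (1 = disease present)
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
rf  = [1, 1, 0, 0, 0, 1, 1, 0]
knn = [1, 0, 1, 0, 1, 0, 1, 0]
svm = [1, 1, 1, 0, 0, 0, 0, 0]
ensemble = majority_vote(rf, knn, svm)
print(binary_metrics(y_true, ensemble))
```

In this toy example each individual classifier misclassifies one or two samples, but their errors fall on different samples, so the majority vote recovers every label, which is the intuition behind the accuracy gain the study reports.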
Procedia PDF Downloads 155
845 Blue Hydrogen Production Via Catalytic Aquathermolysis Coupled with Direct Carbon Dioxide Capture Via Adsorption
Authors: Sherif Fakher
Abstract:
Hydrogen has been gaining global attention as a rising contributor in the energy sector. Labeled an energy carrier, hydrogen is used in many industries and can be used to generate electricity via fuel cells. Blue hydrogen involves the production of hydrogen from hydrocarbons using processes that emit CO₂; however, the CO₂ is captured and stored, so very little environmental damage occurs during the hydrogen production process. This research investigates the ability to use different catalysts for the production of hydrogen from different hydrocarbon sources, including coal, oil, and gas, using a two-step aquathermolysis reaction. The research presents the results of experiments conducted to evaluate different catalysts and also highlights the main advantages of this process over other blue hydrogen production methods, including methane steam reforming, autothermal reforming, and oxidation. Two methods of hydrogen generation were investigated: partial oxidation and aquathermolysis. For both reactions, the reaction kinetics, thermodynamics, and medium were investigated. Following this, experiments were conducted to test the hydrogen generation potential of both methods. The porous media tested were sandstone, ash, and pozzolanic material. The spent oils used were spent motor oil and spent vegetable oil from cooking. Experiments were conducted at temperatures up to 250 °C and pressures up to 3000 psi. Based on the experimental results, mathematical models were developed to predict the hydrogen generation potential at more severe thermodynamic conditions. Since both partial oxidation and aquathermolysis require relatively high temperatures, it was important to devise a method by which these high temperatures can be generated at low cost. This was done by investigating two factors: the porous medium used and the reliance on the spent oil. 
Of all the porous media used, ash had the highest thermal conductivity. The second step was the partial combustion of part of the spent oil to generate the heat needed to reach the high temperatures, which reduced the cost of heat generation significantly. For the partial oxidation reaction, the spent oil was burned at a limited oxygen concentration to generate carbon monoxide. The main drawback of this process is the need for burning, which generates other harmful and environmentally damaging gases. Aquathermolysis does not rely on burning, which makes it the cleaner alternative; however, it needs much higher temperatures to run the reaction. When the hydrogen generation potential of both methods was compared using gas chromatography, aquathermolysis generated 23% more hydrogen than partial oxidation from the same volume of spent oil. This research introduces the concept of using spent oil for hydrogen production, which can be a very promising method of producing a clean source of energy from a waste product. It can also help reduce the reliance on freshwater for hydrogen generation, diverting freshwater to other, more important applications.
Keywords: blue hydrogen production, catalytic aquathermolysis, direct carbon dioxide capture, CCUS
Procedia PDF Downloads 31
844 The 'Toshi-No-Sakon' Phenomenon: A Trend in Japanese Family Formations
Authors: Franco Lorenzo D. Morales
Abstract:
‘Toshi-no-sakon,’ which translates to ‘age-gap marriage,’ is a term that has been popularized by celebrity couples in the Japanese entertainment industry. Japan is distinctive among developed nations for its rapidly aging population, declining marital and fertility rates, and the reinforcement of traditional gender roles. Statistical data show that the average age of marriage in Japan is increasing every year, indicating a growing tendency toward late marriage. As a result, the government has been trying to curb the declining trends by encouraging marriage and childbirth among the populace. This graduate thesis seeks to analyze the ‘toshi-no-sakon’ phenomenon in light of Japan’s current economic and social situation, and to see what the implications are for these kinds of married couples. This research also seeks to expound on age gaps within married couples, a factor rarely touched upon in Japanese family studies. A literature review was first performed in order to provide a framework for studying ‘toshi-no-sakon’ from the perspective of four fields of study: marriage, family, aging, and gender. Numerous anonymous online statements by ‘toshi-no-sakon’ couples were then collected and analyzed, which brought to light a number of concerns. Couples in which the husband is the older partner were prioritized in order to narrow the focus of the research, and ‘toshi-no-sakon’ is considered only when the couple’s age gap is ten years or more. Current findings suggest that one of the perceived merits for a woman of marrying an older man is guaranteed financial security. However, this has been shown to be untrue, as a number of couples express concern regarding their financial situation, which could be attributed to the husband’s socio-economic status. Having an older husband who is approaching retirement age presents another dilemma, as the wife would be more obliged to provide care for her aging husband.
This notion of the wife as caregiver likely stems from an arrangement once common in Japanese families, in which the wife must primarily care for her husband’s elderly parents. Childbearing is another concern, as couples would be pressured to have a child right away because of the husband’s age, in addition to limiting the couple’s ideal number of children. This is another problematic aspect, as the husband would have to provide income until his child has finished their education, implying that retirement would have to be delayed indefinitely. It is highly recommended that future studies conduct face-to-face interviews with couples and families who fall under the category of ‘toshi-no-sakon’ in order to gain a more in-depth perspective on the phenomenon and to reveal any undiscovered trends. Cases in which the wife is the older partner in the relationship should also be given focus in future studies involving ‘toshi-no-sakon’.
Keywords: age gap, family structure, gender roles, marriage trends
Procedia PDF Downloads 364
843 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., their length-to-depth-to-width ratios, generally allow one to correctly disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia is a large river basin draining the country from south to north, 1,550 km long with an average slope of 0.0024 and an average width of 275 m. The river displays high water-level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study was undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns were completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. In order to characterize the erosion process occurring through the meander, extensive suspended-sediment and river-bed samples were also retrieved, as well as soil perforations over the banks. Based on the DEM ground digital mapping survey and field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in a non-structured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and the limitations related to the hydrostatic pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings; notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross-section, as a consequence of severe erosion, has hindered the model’s ability to provide decision makers with a valid, up-to-date planning tool.
Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
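The 1D modelling assumption described above (hydrostatic pressure, flow considered only along the channel length) is commonly grounded in uniform-flow relations such as Manning's equation. A minimal sketch follows; the Manning roughness and the flow depth are illustrative assumptions, not values taken from the Magdalena survey data, though the width and slope match the averages quoted in the abstract:

```python
import math

def manning_discharge(n, width, depth, slope):
    """Discharge (m^3/s) for a rectangular channel via Manning's equation:
    Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    area = width * depth               # flow cross-section A (m^2)
    perimeter = width + 2 * depth      # wetted perimeter P (m)
    radius = area / perimeter          # hydraulic radius R = A/P (m)
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

# Illustrative values: Manning n typical of a natural channel, the abstract's
# average width (275 m) and slope (0.0024); the 4 m depth is assumed.
q = manning_discharge(n=0.030, width=275.0, depth=4.0, slope=0.0024)
print(f"Q = {q:.0f} m^3/s")
```

A calibrated 1D model would adjust n reach by reach against gauging-station records, which is the calibration step the abstract describes.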
Procedia PDF Downloads 319
842 Correlations and Impacts of Optimal Rearing Parameters on the Nutritional Value of Mealworm (Tenebrio molitor)
Authors: Fabienne Vozy, Anick Lepage
Abstract:
Insects display high nutritional value, low greenhouse gas emissions, low land-use requirements, and high feed conversion efficiency. They can contribute to the food chain and be one of many solutions to protein shortages. Currently, in North America, nutritional entomology is under-developed, and work remains to demonstrate its benefits to large-scale producers and consumers (for both human and agricultural needs). As such, large-scale production of mealworms offers a promising alternative to traditional sources of protein and fatty acids. To proceed orderly, more data on the nutritional values of insects must be collected by: a) evaluating insect diets to improve their dietary value; b) testing breeding conditions to optimize yields; c) evaluating the use of by-products and organic residues as food sources. Among the featured technical parameters, relative humidity (RH) percentage and temperature, optimal substrates, and hydration sources are critical elements, establishing potential benchmarks for optimizing conversion rates of protein and fatty acids. This research aims to establish the combination of the most influential rearing parameters with local food residues, and to correlate the findings with the nutritional value of the harvested larvae. Per replicate, 125 same-month-old adults are randomly selected from the mealworm breeding pool and placed to oviposit in growth chambers preset at 26 °C and 65% RH. Adults are removed after 7 days. Larvae are harvested upon the appearance of the first signs of nymphosis, and batches are analyzed for their nutritional values using wet chemistry analysis. The first sample analyses include the total weight of both fresh and dried larvae, residual humidity, crude protein (CP%), and crude fat (CF%). Further analyses are scheduled to include soluble proteins and fatty acids.
Although consistent with previously published data, the preliminary results show no significant differences between treatments for any type of analysis. The nutritional properties of the substrate combinations have not yet allowed the most effective residue recipe to be discriminated. Technical issues, such as the particle size of the various substrate combinations and larvae-screen compatibility, are to be investigated, since they induced a variable percentage of lost larvae upon harvesting. Addressing these methodological issues is key to developing a standardized, efficient procedure. The aim is to provide producers with easily reproducible conditions, without incurring excessive additional expenditure on their part in terms of equipment and workforce.
Keywords: entomophagy, nutritional value, rearing parameters optimization, Tenebrio molitor
Procedia PDF Downloads 111
841 Children's Literature with Mathematical Dialogue for Teaching Mathematics at Elementary Level: An Exploratory First Phase about Students’ Difficulties and Teachers’ Needs in Third and Fourth Grade
Authors: Goulet Marie-Pier, Voyer Dominic, Simoneau Victoria
Abstract:
In a previous research project (2011-2019) funded by the Quebec Ministry of Education, an educational approach was developed based on the teaching and learning of place value through children's literature. Subsequently, the effect of this approach on conceptual understanding of the concept among first graders (6-7 years old) was studied. The current project aims to create a series of children's literature to help older elementary school students (8-10 years old) develop a conceptual understanding of complex mathematical concepts taught at their grade level, rather than a more typical procedural understanding. Since no educational materials or children's books exist to achieve these goals, four stories, accompanied by mathematical activities, will be created to support students, and their teachers, in the learning and teaching of mathematical concepts that can be challenging within the mathematics curriculum. The stories will also introduce mathematical dialogue into the characters' discourse, with the aim of addressing various mathematical foundations about which erroneous statements are common among students and occasionally among teachers. In other words, the stories aim to empower students seeking a real understanding of difficult mathematical concepts, as well as teachers seeking a way to teach these difficult concepts that goes beyond memorizing rules and procedures. In order to choose the concepts that will be part of the stories, it is essential to understand the current landscape regarding the main difficulties experienced by students in third and fourth grade (8-10 years old) and their teachers’ needs. From this perspective, the preliminary phase of the study, as discussed in the presentation, will provide critical insight into the mathematical concepts with which the target grade levels struggle the most.
From these data, the research team will select the concepts and develop the stories in the second phase of the study. Two questions are preliminary to the implementation of the approach, namely: (1) what mathematical concepts are considered the most “difficult to teach” by teachers in the third and fourth grades? and (2) according to teachers, what are the main difficulties encountered by their students in numeracy? Self-administered online questionnaires built with the SimpleSondage software will be sent to all third- and fourth-grade teachers in nine school service centers in the Quebec region, representing approximately 300 schools. The data, to be collected in the fall of 2022, will be used to compare the difficulties identified by the teachers with those prevalent in the scientific literature. Because it ensures consistency between the proposed approach and the true needs of the educational community, this preliminary phase is essential to the relevance of the rest of the project. It is also an essential first step toward the two ultimate goals of the research project: improving the numeracy learning of elementary school students, and contributing to the professional development of elementary school teachers.
Keywords: children’s literature, conceptual understanding, elementary school, learning and teaching, mathematics
Procedia PDF Downloads 89
840 Dry Reforming of Methane Using Metal Supported and Core Shell Based Catalyst
Authors: Vinu Viswanath, Lawrence Dsouza, Ugo Ravon
Abstract:
Syngas, an intermediary gas product, has a wide range of applications in producing various chemical products, such as mixed alcohols, hydrogen, ammonia, Fischer-Tropsch products, methanol, ethanol, and aldehydes. Several technologies are available for syngas production. As an alternative to the conventional processes, an attractive route has been developed that utilizes carbon dioxide and methane in an equimolar ratio to generate syngas with a ratio close to one; this is termed Dry Reforming of Methane (DRM). It also offers the advantage of utilizing the greenhouse gases CO₂ and CH₄. The dry reforming process is highly endothermic: ΔG becomes negative only if the temperature is higher than 900 K, and practically, the reaction occurs at 1000-1100 K. At these temperatures, sintering of the metal particles occurs, which deactivates the catalyst. Moreover, the methane is only partially oxidized, and some coke deposition occurs, also causing catalyst deactivation. The current research work was focused on mitigating the main challenges of the dry reforming process, namely coke deposition and metal sintering at high temperature. To achieve these objectives, we employed three different strategies of catalyst development: 1) use of bulk catalysts such as olivine- and pyrochlore-type materials; 2) use of metal-doped support materials, like spinel- and clay-type materials; 3) use of a core-shell model catalyst. In this approach, a thin layer (shell) of redox metal oxide is deposited over a MgAl₂O₄/Al₂O₃-based support material (core). For the core-shell approach, an active metal is deposited on the surface of the shell. The shell structure formed is a doped metal oxide that can undergo reduction and oxidation (redox) reactions, and the core is an alkaline earth aluminate with a high affinity towards carbon dioxide.
In the case of the metal-doped support catalyst, the enhanced redox properties of the doped CeO₂ oxide and the CO₂ affinity of the alkaline earth aluminates collectively help to overcome coke formation. For all three strategies, a systematic screening of the metals was carried out to optimize the efficiency of the catalyst. To evaluate their performance, activity and stability tests were carried out under reaction conditions of temperatures ranging from 650 to 850 °C and operating pressures ranging from 1 to 20 bar. The results indicate that the core-shell model catalyst showed high activity and better stability as a DR catalyst under atmospheric as well as high-pressure conditions. In this presentation, we will show the results related to these strategies.
Keywords: carbon dioxide, dry reforming, supports, core shell catalyst
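The endothermicity argument above can be checked with a back-of-envelope Gibbs-energy estimate for the DRM reaction, CH₄ + CO₂ → 2 CO + 2 H₂. The sketch below uses textbook standard-state values (ΔH° ≈ 247 kJ/mol, ΔS° ≈ 257 J/(mol·K) at 298 K) and neglects their temperature dependence; it is a rough check consistent with the abstract's statement, not the authors' calculation:

```python
# Back-of-envelope thermodynamics for dry reforming: CH4 + CO2 -> 2 CO + 2 H2.
# Standard-state values at 298 K (textbook approximations); the temperature
# dependence of dH and dS is neglected, so this is only a rough estimate.
DH = 247_000.0   # reaction enthalpy, J/mol (strongly endothermic)
DS = 257.0       # reaction entropy, J/(mol*K) (2 moles of gas become 4)

def gibbs(T):
    """dG(T) = dH - T*dS under the constant-dH/dS approximation."""
    return DH - T * DS

T_crossover = DH / DS  # temperature at which dG changes sign

print(f"dG(800 K)  = {gibbs(800) / 1000:.0f} kJ/mol  (positive: not spontaneous)")
print(f"dG(1100 K) = {gibbs(1100) / 1000:.0f} kJ/mol (negative: spontaneous)")
print(f"crossover near {T_crossover:.0f} K")
```

The crossover lands just above 950 K, in line with the abstract's claim that ΔG turns negative above roughly 900 K and that the reaction is run at 1000-1100 K.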
Procedia PDF Downloads 179
839 Determinants of Maternal Near-Miss among Women in Public Hospital Maternity Wards in Northern Ethiopia: A Facility Based Case-Control Study
Authors: Dejene Ermias Mekango, Mussie Alemayehu, Gebremedhin Berhe Gebregergs, Araya Abrha Medhanye, Gelila Goba
Abstract:
Background: Maternal near miss (MNM) can be used as a proxy indicator of the maternal mortality ratio. There is a huge gap in lifetime risk between Sub-Saharan Africa and developed countries. In Ethiopia, a significant number of women die each year from complications during pregnancy, childbirth, and the post-partum period. Moreover, few studies have been performed on MNM, and little is known regarding its determinant factors. This study aims to identify determinants of MNM among women in Tigray region, Northern Ethiopia. Methods: A case-control study was conducted in public hospitals in Tigray region, Ethiopia, from January 30 to March 30, 2016. The sample included 103 cases and 205 controls recruited from women seeking obstetric care at six public hospitals. Clients having a life-threatening obstetric complication, including haemorrhage, hypertensive diseases of pregnancy, dystocia, infections, and anemia or clinical signs of severe anemia in women without haemorrhage, were taken as cases, and those with normal obstetric outcomes were considered controls. Cases were selected by allocation proportional to size, while systematic sampling was employed for controls. Data were analyzed using SPSS version 20.0. Binary and multiple-variable logistic regression (odds ratio) analyses were calculated with 95% CIs. Results: The largest proportion of cases and controls was among the ages of 20-29 years, accounting for 37.9% (39) of cases and 31.7% (65) of controls. Roughly 90% of cases and controls were married. About two-thirds of controls and 45.6% (47) of cases had a gestational age between 37 and 41 weeks. A history of chronic medical conditions was reported in 55.3% (57) of cases and 33.2% (68) of controls.
Women with no formal education [AOR = 3.2; 95% CI: 1.24, 8.12], being less than 16 years old at first pregnancy [AOR = 2.5; 95% CI: 1.12, 5.63], induced labor [AOR = 3; 95% CI: 1.44, 6.17], a history of Cesarean section (C-section) [AOR = 4.6; 95% CI: 1.98, 7.61] or chronic medical disorder [AOR = 3.5; 95% CI: 1.78, 6.93], and traveling more than 60 minutes before reaching the final place of care [AOR = 2.8; 95% CI: 1.19, 6.35] were all associated with higher odds of experiencing MNM. Conclusions: The Government of Ethiopia should continue its efforts to address the lack of road and health facility access as well as education, which will help reduce MNM. Work should also continue to educate women and providers about common predictors of MNM, such as a history of C-section, chronic illness, and teenage pregnancy. These efforts should be carried out at the facility, community, and individual levels. Targeted follow-up of women with a history of chronic disease or C-section could also be a practical way to reduce MNM.
Keywords: maternal near miss, severe obstetric hemorrhage, hypertensive disorder, c-section, Tigray, Ethiopia
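The adjusted odds ratios above come from multivariable logistic regression; the underlying statistic is easiest to see in its crude (unadjusted) form, computed from a 2x2 exposure table. A sketch follows, with hypothetical counts for one exposure among 103 cases and 205 controls; these numbers are illustrative only and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table (Woolf's method):
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one exposure (e.g., prior C-section) among
# 103 cases and 205 controls -- illustrative only.
or_, lo, hi = odds_ratio_ci(a=30, b=73, c=20, d=185)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A CI that excludes 1, as here, corresponds to the statistically significant associations reported in the abstract; the adjusted (AOR) versions additionally control for the other covariates in the regression model.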
Procedia PDF Downloads 222
838 A Multifactorial Algorithm to Automate Screening of Drug-Induced Liver Injury Cases in Clinical and Post-Marketing Settings
Authors: Osman Turkoglu, Alvin Estilo, Ritu Gupta, Liliam Pineda-Salgado, Rajesh Pandey
Abstract:
Background: Hepatotoxicity can be linked to a variety of clinical symptoms and histopathological signs, posing a great challenge in the surveillance of suspected drug-induced liver injury (DILI) cases in the safety database. Additionally, the majority of such cases are rare, idiosyncratic, highly unpredictable, and tend to demonstrate unique individual susceptibility; these qualities, in turn, lend themselves to a pharmacovigilance monitoring process that is often tedious and time-consuming. Objective: To develop a multifactorial algorithm to assist pharmacovigilance physicians in identifying high-risk hepatotoxicity cases associated with DILI from the sponsor’s safety database (Argus). Methods: Multifactorial selection criteria were established using Structured Query Language (SQL) and the TIBCO Spotfire® visualization tool, via a combination of word fragments, wildcard strings, and mathematical constructs, based on Hy’s law criteria and the pattern of injury (R-value). These criteria excluded non-eligible cases from monthly line listings mined from the Argus safety database. The capabilities and limitations of these criteria were verified by comparing a manual review of all monthly cases with the system-generated monthly listings over six months. Results: On average, over a period of six months, the algorithm accurately identified 92% of DILI cases meeting the established criteria. The automated process easily compared liver enzyme elevations with baseline values, reducing the screening time to under 15 minutes, as opposed to the multiple hours consumed by a cognitively laborious manual process. Limitations of the algorithm include its inability to identify cases associated with non-standard laboratory tests, naming conventions, and/or incomplete or incorrectly entered laboratory values. Conclusions: The newly developed multifactorial algorithm proved to be extremely useful in detecting potential DILI cases, while heightening the vigilance of the drug safety department.
Additionally, the application of this algorithm may be useful in identifying a potential signal for DILI in drugs not yet known to cause liver injury (e.g., drugs in the initial phases of development). The algorithm also carries the potential for universal application, owing to its product-agnostic data and keyword mining features. Plans for the tool include developing it into a fully automated application, thereby completely eliminating the manual screening process.
Keywords: automation, drug-induced liver injury, pharmacovigilance, post-marketing
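The algorithm's core criteria, Hy's law and the R-value pattern of injury, can be sketched as a simple screen. The thresholds below follow the standard published definitions (transaminase >= 3x ULN with bilirubin >= 2x ULN and without marked cholestasis; R >= 5 hepatocellular, R <= 2 cholestatic, otherwise mixed); the function and field names are hypothetical and are not the Argus/SQL implementation described above:

```python
def r_value(alt, alt_uln, alp, alp_uln):
    """Pattern-of-injury R-value: (ALT/ULN) / (ALP/ULN)."""
    return (alt / alt_uln) / (alp / alp_uln)

def injury_pattern(r):
    """Standard convention: R >= 5 hepatocellular, R <= 2 cholestatic, else mixed."""
    if r >= 5:
        return "hepatocellular"
    if r <= 2:
        return "cholestatic"
    return "mixed"

def meets_hys_law(alt, alt_uln, bili, bili_uln, alp, alp_uln):
    """Hy's law screen: transaminase >= 3x ULN with total bilirubin >= 2x ULN,
    without marked cholestasis (ALP < 2x ULN)."""
    return alt >= 3 * alt_uln and bili >= 2 * bili_uln and alp < 2 * alp_uln

# Hypothetical case: ALT 210 U/L (ULN 40), total bilirubin 2.8 mg/dL (ULN 1.2),
# ALP 100 U/L (ULN 120).
r = r_value(alt=210, alt_uln=40, alp=100, alp_uln=120)
print(injury_pattern(r), meets_hys_law(210, 40, 2.8, 1.2, 100, 120))
```

In the production tool these comparisons were expressed as SQL over the safety-database line listings, with the added complication of matching free-text lab names via word fragments and wildcards.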
Procedia PDF Downloads 152
837 Investigation of a Technology Enabled Model of Home Care: The eShift Model of Palliative Care
Authors: L. Donelle, S. Regan, R. Booth, M. Kerr, J. McMurray, D. Fitzsimmons
Abstract:
Palliative home health care provision within the Canadian context is challenged by: (i) a shortage of registered nurses (RNs) and of RNs with palliative care expertise, (ii) an aging population, (iii) reliance on unpaid family caregivers to sustain home care services, with limited support for this ‘care work’, (iv) a model of healthcare that assumes client self-care, and (v) competing economic priorities. In response, an interprofessional team of service provider organizations, a software/technology provider, and health care providers developed and implemented a technology-enabled model of home care, the eShift model of palliative home care (eShift). The eShift model combines communication and documentation technology with non-traditional utilization of health human resources to meet patient needs for palliative care in the home. The purpose of this study was to investigate the structure, processes, and outcomes of the eShift model of care. Methodology: Guided by Donabedian’s evaluation framework for health care, this qualitative-descriptive study investigated the structure, processes, and outcomes of care of the eShift model of palliative home care. Interviews and focus groups were conducted with health care providers (n = 45), decision-makers (n = 13), technology providers (n = 3), and family caregivers (n = 8). Interviews were recorded and transcribed, and a deductive analysis of the transcripts was conducted. Study findings: (1) Structure: The eShift model consists of a remotely situated RN using technology to direct care provision virtually to patients in their homes. The remote RN is connected virtually to a health technician (an unregulated care provider) in the patient’s home using real-time communication. The health technician uses a smartphone modified with the eShift application and communicates with the RN, who uses a computer with the eShift application/dashboard. Documentation and communication about patient observations and care activities occur in the eShift portal.
The RN is typically accountable for four to six health technicians and patients over an 8-hour shift. The technology provider was identified as an important member of the healthcare team. Other members of the team include family members, care coordinators, nurse practitioners, physicians, and allied health professionals. (2) Processes: Conventionally, patient needs are the focus of care; within eShift, however, both the patient and the family caregiver were the focus of care. Enhanced medication administration was seen as one of the most important processes, and family caregivers reported high satisfaction with the care provided. There was perceived enhanced teamwork among health care providers. (3) Outcomes: Patients were able to die at home. The eShift model enabled consistency and continuity of care, and effective management of patient symptoms and caregiver respite. Conclusion: More than a technology solution, the eShift model of care was viewed as transforming home care practice and as an innovative way to resolve the shortage of palliative care nurses within home care.
Keywords: palliative home care, health information technology, patient-centred care, interprofessional health care team
Procedia PDF Downloads 418
836 Process Safety Management Digitalization via SHEQTool Based on Occupational Safety and Health Administration and Center for Chemical Process Safety, a Case Study in Petrochemical Companies
Authors: Saeed Nazari, Masoom Nazari, Ali Hejazi, Siamak Sanoobari Ghazi Jahani, Mohammad Dehghani, Javad Vakili
Abstract:
More than ever, digitization is an imperative for businesses to keep their competitive advantages, foster innovation, and reduce paperwork. To design and successfully implement digital transformation initiatives within a process safety management system, employees need to be equipped with the right tools, frameworks, and best practices. We developed a unique, fully dynamic, full-stack application called SHEQTool, based on our extensive expertise, experience, and client feedback, to support business processes, particularly operations safety management. We used our best knowledge and the scientific methodologies published in CCPS and OSHA guidelines to streamline operations and integrated them into task management within petrochemical companies. We digitalize the main elements and sub-elements of their process safety management systems, such as hazard identification and risk management, training and communication, inspection and audit, critical change management, contractor management, permit to work, pre-start-up safety review, incident reporting and investigation, emergency response plan, personal protective equipment, occupational health, and action management, in a fully customizable manner with no programming needed from users.
We reviewed the feedback from the main actors within a petrochemical plant, which highlights improvements in their business performance and productivity as well as the tracking of their functions’ key performance indicators (KPIs), because the tool: 1) saves time, resources, and the costs of paperwork (digitalization); 2) reduces errors and improves performance within the management system by covering most daily software needs of the organization and reducing the complexity and associated costs of numerous tools and their required training (one-tool approach); 3) focuses on management systems, integrates functions, and puts them into traceable task management (RASCI and flowcharting); 4) helps the entire enterprise be resilient to any change in processes, technologies, or assets at minimum cost (organizational resilience); 5) significantly reduces incidents and errors via world-class safety management programs and elements (simplification); 6) gives companies a systematic, traceable, risk-based, process-based, and science-based integrated management system (proper methodologies); 7) helps business processes comply with ISO 9001, ISO 14001, ISO 45001, ISO 31000, best practices, and legal regulations through a PDCA approach (compliance).
Keywords: process, safety, digitalization, management, risk, incident, SHEQTool, OSHA, CCPS
Procedia PDF Downloads 66
835 Baricitinib Lipid-Based Nanosystems as a Topical Alternative for Atopic Dermatitis Treatment
Authors: N. Garrós, P. Bustos, N. Beirampour, R. Mohammadi, M. Mallandrich, A.C. Calpena, H. Colom
Abstract:
Atopic dermatitis (AD) is a persistent skin condition characterized by chronic inflammation caused by an autoimmune response. It is a prevalent clinical issue that requires continual treatment to enhance the patient's quality of life. Systemic therapy often involves the use of glucocorticoids or immunosuppressants to manage symptoms. Our objective was to create and assess topical liposomal formulations containing baricitinib (BNB), a reversible inhibitor of Janus kinase (JAK), which is involved in various immune responses. These formulations were intended to address flare-ups and improve treatment outcomes for AD. We created three distinct liposomal formulations by combining different amounts of 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC), cholesterol (CHOL), and ceramide (CER): (i) pure POPC; (ii) POPC mixed with CHOL (at a ratio of 8:2, mol/mol); and (iii) POPC mixed with CHOL and CER (at a ratio of 3.6:2.4:4.0, mol/mol/mol). We conducted various tests to determine the formulations' skin tolerance, irritancy capacity, and ability to cause erythema and edema on altered skin. We also assessed the transepidermal water loss (TEWL) and skin hydration of rabbits to evaluate the efficacy of the formulations. Histological analysis, the HET-CAM test, and the modified Draize test were all used in the evaluation process. The histological analysis revealed that the POPC and POPC:CHOL liposomes avoided any damage to the tissue structures. The HET-CAM test showed no irritation caused by any of the three liposomes, and the modified Draize test showed good Draize scores for erythema and edema. The POPC liposome effectively counteracted the impact of xylol on the skin, and no erythema or edema was observed during the study. TEWL values were constant for all the liposomes, with values similar to the negative control (within the range of 8-15 g/h·m², a healthy value for rabbits), whereas the positive control showed a significant increase.
The skin hydration values were constant and followed the trend of the negative control, while the positive control showed a steady increase during the tolerance study. In conclusion, the developed formulations containing BNB exhibited no harmful or irritating effects: they did not demonstrate any irritant potential in the HET-CAM test, and the POPC and POPC:CHOL liposomes did not cause any structural alteration according to the histological analysis. These positive findings suggest that additional research is needed to evaluate the efficacy of these liposomal formulations in animal models of the disease, including mutant animals. Furthermore, before proceeding to clinical trials, biochemical investigations should be conducted to better understand the mechanisms of action involved.
Keywords: baricitinib, HET-CAM test, histological study, JAK inhibitor, liposomes, modified draize test
Procedia PDF Downloads 92