Search results for: shared memory parallel programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4004

44 Case Report: A Case of Confusion with Review of Sedative-Hypnotic Alprazolam Use

Authors: Agnes Simone

Abstract:

A 52-year-old male with unknown psychiatric and medical history was brought to the Psychiatric Emergency Room by ambulance directly from jail. He had been detained for three weeks for possession of a firearm while intoxicated. On initial evaluation, the patient was unable to provide a reliable history. He presented with odd jerking movements of his extremities and catatonic features, including mutism and stupor. His vital signs were stable. Patient was transferred to the medical emergency department for work-up of altered mental status. Due to suspicion for opioid overdose, the patient was given naloxone (Narcan) with no improvement. Laboratory work-up included complete blood count, comprehensive metabolic panel, thyroid stimulating hormone, vitamin B12, folate, magnesium, rapid plasma reagin, HIV, blood alcohol level, aspirin, and Tylenol blood levels, urine drug screen, and urinalysis, which were all negative. CT head and chest X-Ray were also negative. With this negative work-up, the medical team concluded there was no organic etiology and requested inpatient psychiatric admission. Upon re-evaluation by psychiatry, it was evident that the patient continued to have an altered mental status. Of note, the medical team did not include substance withdrawal in the differential diagnosis due to stable vital signs and a negative urine drug screen. The psychiatry team decided to check California's prescription drug monitoring program (CURES) and discovered that the patient was prescribed benzodiazepine alprazolam (Xanax) 2mg BID, a sedative-hypnotic, and hydrocodone/acetaminophen 10mg/325mg (Norco) QID, an opioid. After a thorough chart review, his daughter's contact information was found, and she confirmed his benzodiazepine and opioid use, with recent escalation and misuse. 
It was determined that the patient was experiencing alprazolam withdrawal, given this collateral information, his current symptoms, negative urine drug screen, and recent abrupt discontinuation of medications while incarcerated. After admission to the medical unit and two doses of alprazolam 2mg, the patient's mental status, alertness, and orientation improved, but he had no memory of the events that led to his hospitalization. He was discharged with a limited supply of alprazolam and close follow-up to arrange a taper. Accompanying this case report, a qualitative review of presentations with alprazolam withdrawal was completed. This case and the review highlight: (1) Alprazolam withdrawal can occur at low doses and within just one week of use. (2) Alprazolam withdrawal can present without any vital sign instability. (3) Alprazolam withdrawal does not respond to short-acting benzodiazepines but does respond to certain long-acting benzodiazepines due to its unique chemical structure. (4) Alprazolam withdrawal is distinct from and more severe than other benzodiazepine withdrawals. This case also highlights: (1) the importance of physician utilization of drug-monitoring programs (this case, in particular, relied on California's program); (2) the importance of obtaining collateral information, especially when the patient is unable to provide a reliable history; (3) the importance of including substance intoxication and withdrawal in the differential diagnosis even when the urine drug screen is negative, since the toxidrome of withdrawal can be delayed; and (4) the importance of discussing the addiction and withdrawal risks of medications with patients.

Keywords: addiction risk of benzodiazepines, alprazolam withdrawal, altered mental status, benzodiazepines, drug monitoring programs, sedative-hypnotics, substance use disorder

Procedia PDF Downloads 96
43 Even When the Passive Resistance Is Obligatory: Civil Intellectuals’ Solidarity Activism in Tea Workers Movement

Authors: Moshreka Aditi Huq

Abstract:

This study shows how a progressive portion of civil intellectuals in Bangladesh contributed as solidarity activists in a movement of tea workers that became the symbol of their unique moral struggle. Their passive yet sharp form of resistance, together with the mass of tea workers of a tea estate, was demonstrated against certain private companies and government officials who sought to establish a special economic zone inside the tea garden without offering any compensation or rehabilitation for the poor tea workers. In the face of massive protest and rebellion, the authorized entrepreneurs had to step back and call off the project immediately. The extraordinary features of this movement grew out of the deep social need of indigenous tea workers who are still imprisoned in a colonial cage. Following an anthropological and ethnographic perspective, this study adopted three main techniques, intensive interviews, focus group discussions, and observation, to extract empirical data. The intensive interviews were undertaken informally using a mostly conversational approach. Focus group discussions were piloted among various representative groups, where observation prevailed as part of the regular documentation process. These were conducted among civil intellectuals, tea workers, tea estate authorities, civil service authorities, and business officials to obtain a holistic view of the situation. The fieldwork was executed in the capital, Dhaka, along with northern areas such as the Chandpur-Begumkhan Tea Estate of Chunarughat Upazilla and Habiganj city of Habiganj District, Bangladesh. Correspondingly, secondary data were accessed through books, scholarly papers, archives, newspapers, reports, leaflets, posters, blogs, and social media pages.
The study finds that: (1) civil intellectuals opposed state-sponsored business impositions by producing counter-discourse and struggled against state hegemony throughout the phases of the movement; (2) rather than active physical resistance, civil intellectuals’ strength lay in a passive form expressed through their intellectual labor; (3) the combined movement of tea workers and civil intellectuals reflected on the social security of ethnic worker communities, in contrast to the state’s pseudo-development motives, which ultimately support offensive and oppressive neoliberal economic growth; (4) civil intellectuals have certain functional limitations in movement organization as well as resource mobilization; (5) in specific contexts, the genuine need for protest by the indigenous subaltern can overshadow intellectual elitism and help raise the voices of ‘subjugated knowledge’. This study represents two sets of apparent protagonists in the discussion of social injustice and oppressive development intervention. On the one hand, it may help us find the basic functional characteristics of civil intellectuals in Bangladesh when they adopt a passive mode of resistance in social movements. On the other hand, it represents the community ownership and inherent protest tendencies of indigenous workers when they feel threatened and insecure. The study has the potential to illuminate the conditions of the ‘subjugated knowledge’ of subalterns. Furthermore, as memory and narrative, these ‘activism mechanisms’ of social entities broaden the path to understanding ‘power’ and ‘resistance’ in more nuanced ways.

Keywords: civil intellectuals, resistance, subjugated knowledge, indigenous

Procedia PDF Downloads 105
42 Examining the Current Divisive State of American Political Discourse through the Lens of Peirce's Triadic Logical Structure and Pragmatist Metaphysics

Authors: Nathan Garcia

Abstract:

The polarizing dialogue of contemporary American politics results from core philosophical differences. But these differences go beyond the ideological to the metaphysical. Intellectual historians have theorized that fundamental concepts such as freedom, God, and nature have been sterilized of their intellectual vigor. They are partially correct. The 19th-century pragmatist Charles Sanders Peirce offers a penetrating philosophy that can yield greater insight into the contemporary political divide. Peirce argues that metaphysical and ethical issues are derivative of operational logic. His triadic logical structure, and the metaphysical principles constructed therefrom, are applicable to the present moment for three reasons. First, Peirce’s logic aptly scrutinizes the logical processes of liberal and conservative mindsets. Each group arrives at a cosmological root metaphor (abduction), resulting in a contemporary assessment (deduction), ultimately prompting attempts to verify the original abduction (induction). Peirce’s system demonstrates that liberal citizens develop a cosmological root metaphor in the concept of fairness (abduction), resulting in a contemporary assessment of, for example, underrepresented communities being unfairly preyed upon (deduction), thereby inciting anger toward traditional socio-political structures suspected of purposefully destabilizing minority communities (induction). Similarly, conservative citizens develop a cosmological root metaphor in the concept of freedom (abduction), resulting in a contemporary assessment of, for example, liberal citizens advocating an expansion of governmental powers (deduction), thereby inciting anger towards liberal communities suspected of attacking the freedoms of ordinary Americans in a bid to empower their interests through the government (induction). The value of this triadic assessment is the categorization of distinct types of inferential logic by their purpose and boundaries.
Only deductive claims can be concretely proven, while abductive claims are merely preliminary hypotheses, and inductive claims are accountable to interdisciplinary oversight. Liberal and conservative logical processes preclude constructive dialogue because of (a) an unshared abductive framework, and (b) misunderstanding of the rules and responsibilities attached to each type of claim. Second, Peircean metaphysical principles offer a more penetrating summary of the current divisive political climate. His insights can weed through the partisan theorizing to unravel the underlying philosophical problems. Corrosive nominalistic and essentialistic presuppositions weaken the ability to share experiences and communicate effectively, both requisite for any promising constructive dialogue. Peirce’s pragmatist system can expose and evade fallacious thinking in pursuit of a refreshing alternative framework. Finally, Peirce’s metaphysical foundation enables a logically coherent, scientifically informed orthopraxis well-suited for American dialogue. His logical structure necessitates a radically different anthropology conducive to shared experiences and dialogue within a dynamic, cultural continuum. Peirce’s fallibilism and sensitivity to religious sentiment successfully navigate between liberal and conservative values. In sum, he provides a normative paradigm for intranational dialogue that privileges individual experience and values morally defensible notions of freedom, God, and nature. Utilizing Peirce’s thought will yield fruitful analysis and offers a promising philosophical alternative for framing and engaging in contemporary American political discourse.

Keywords: Charles S. Peirce, American politics, logic, pragmatism

Procedia PDF Downloads 88
41 Supports for Student Learning Program: Exploring the Educational Terrain of Newcomer and Refugee Students in Canada

Authors: Edward Shizha, Edward Makwarimba

Abstract:

This literature review explores current research on the educational strengths and barriers of newcomer and refugee youth in Canada. Canada’s shift in immigration policy in the past three decades, from Europe to Asian and African countries as source continents of recent immigrants to Canada, has tremendously increased the ethnic, linguistic, cultural and religious diversity of the population, including that of students in its education system. Over 18% of the country’s population was born in another country, of which 70% are visible minorities. There has been an increase in admitted immigrants and refugees, with a total of 226,203 between July 2020 and June 2021. Newcomer parents and their children in all major destination countries, including Canada, face tremendous challenges, including racism and discrimination, lack of English language skills, poverty, income inequality, unemployment, and underemployment. They face additional challenges, including discrimination against those who cannot speak the official languages, English or French. The severity of the challenges depends on several intersectional factors, including immigrant status (asylum seeker, refugee, or immigrant), age, gender, level of education and others. Through the lens of intersectionality as an explanatory perspective, this literature review examines the educational attainment and outcomes of newcomer and refugee youth in Canada in order to understand their educational needs, educational barriers and strengths. Newcomer youths’ experiences are shaped by numerous intersectional and interconnected sociocultural, sociopolitical, and socioeconomic factors—including gender, migration status, racialized status, ethnicity, socioeconomic class, sexual minority status, age, race—that produce and perpetuate their disadvantage. 
According to research, immigrants and refugees from visible minority ethnic backgrounds experience exclusion more than newcomers from other backgrounds and groups from the mainstream population. For many immigrant parents, migration provides financial and educational opportunities for their children. Yet, when attending school, newcomer and refugee youth face unique challenges related to racism and discrimination; negative attitudes and stereotypes from teachers and other school authorities; language learning and proficiency; differing levels of acculturation; different cultural views of the role of parents in relation to teachers and school; and unfamiliarity with the social or school context in Canada. Recognizing discrepancies in the educational attainment of newcomer and refugee youth based on their race and immigrant status, the paper develops insights into existing research and data gaps related to educational strengths and challenges for visible minority newcomer youth in Canada. The paper concludes that the educational successes or failures of newcomer and refugee youth and their settlement and integration into the school system in Canada may depend on where their families settle, the attitudes of the host community and of school officials (teachers, guidance counsellors and school administrators), after-school support programs, and their own coping mechanisms. Conceivably, a unique approach to after-school programming should provide learning supports and opportunities that consider newcomer and refugee youth’s needs, experiences, backgrounds and circumstances. This support is likely to translate into significant academic and psychological well-being for newcomer students.

Keywords: deficit discourse, discrimination, educational outcomes, newcomer and refugee youth, racism, strength-based approach, whiteness

Procedia PDF Downloads 43
40 Elevated Systemic Oxidative-Nitrosative Stress and Cerebrovascular Function in Professional Rugby Union Players: The Link to Impaired Cognition

Authors: Tom S. Owens, Tom A. Calverley, Benjamin S. Stacey, Christopher J. Marley, George Rose, Lewis Fall, Gareth L. Jones, Priscilla Williams, John P. R. Williams, Martin Steggall, Damian M. Bailey

Abstract:

Introduction and aims: Sports-related concussion (SRC) represents a significant and growing public health concern in rugby union, yet remains one of the least understood injuries facing the health community today. Alongside increasing SRC incidence rates, there is concern that prior recurrent concussion may contribute to long-term neurologic sequelae in later life. This may be due to an accelerated decline in cerebral perfusion, a major risk factor for neurocognitive decline and neurodegeneration, though the underlying mechanisms remain to be established. The present study hypothesised that recurrent concussion in current professional rugby union players would result in elevated systemic oxidative-nitrosative stress, reflected by a free radical-mediated reduction in nitric oxide (NO) bioavailability, and impaired cerebrovascular and cognitive function. Methodology: A longitudinal study design was adopted across the 2017-2018 rugby union season. Ethical approval was obtained from the University of South Wales Ethics Committee. Data collection is ongoing, and therefore the current report documents results from the pre-season and first half of the in-season data collection. Participants were initially divided into two subgroups: 23 professional rugby union players (aged 26 ± 5 years) and 22 non-concussed controls (27 ± 8 years). Pre-season measurements were performed for cerebrovascular function (Doppler ultrasound of middle cerebral artery velocity (MCAv) in response to hypocapnia/normocapnia/hypercapnia), cephalic venous concentrations of the ascorbate radical (A•-, electron paramagnetic resonance spectroscopy), NO (ozone-based chemiluminescence) and cognition (neuropsychometric tests). Notational analysis was performed to assess contact in the rugby group throughout each competitive game. Results: 1001 tackles and 62 injuries, including three concussions, were observed across the first half of the season.
However, no associations were apparent between the number of tackles and any injury type (P > 0.05). The rugby group expressed greater oxidative stress, as indicated by increased A•- (P < 0.05 vs. control) and a subsequent decrease in NO bioavailability (P < 0.05 vs. control). The rugby group performed worse in the Rey Auditory Verbal Learning Test B (RAVLT-B; learning and memory) and the Grooved Pegboard test using both the dominant and non-dominant hands (visuomotor coordination, P < 0.05 vs. control). There were no between-group differences in cerebral perfusion at baseline (MCAv: 54 ± 13 vs. 59 ± 12, P > 0.05). Likewise, no between-group differences in CVRCO2Hypo (2.58 ± 1.01 vs. 2.58 ± 0.75, P > 0.05) or CVRCO2Hyper (2.69 ± 1.07 vs. 3.35 ± 1.28, P > 0.05) were observed. Conclusion: The present study identified that rugby union players are characterized by impaired cognitive function subsequent to elevated systemic oxidative-nitrosative stress. However, this appears to be independent of any functional impairment in cerebrovascular function. Given the potential long-term trajectory towards accelerated cognitive decline in populations exposed to SRC, prophylaxis to increase NO bioavailability warrants consideration.
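Not spelled out in the abstract, but cerebrovascular CO2 reactivity indices such as CVRCO2Hypo and CVRCO2Hyper are conventionally computed as the percent change in MCAv per mmHg change in end-tidal CO2. A minimal sketch under that standard definition (the function name and the illustrative values are hypothetical, not study data):

```python
def cerebrovascular_reactivity(mcav_baseline, mcav_challenge,
                               etco2_baseline, etco2_challenge):
    """Percent change in middle cerebral artery velocity (MCAv)
    per mmHg change in end-tidal CO2 (conventional CVR definition)."""
    delta_mcav_pct = 100.0 * (mcav_challenge - mcav_baseline) / mcav_baseline
    delta_etco2 = etco2_challenge - etco2_baseline
    return delta_mcav_pct / delta_etco2

# Hypothetical hypercapnic challenge: MCAv rises from 54 to 68 cm/s
# while end-tidal CO2 rises from 40 to 50 mmHg.
cvr = cerebrovascular_reactivity(54.0, 68.0, 40.0, 50.0)
```

A value around 2.6 %/mmHg would sit in the same range as the group means quoted above, which is why this definition is a plausible reading of the reported units.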

Keywords: cognition, concussion, mild traumatic brain injury, rugby

Procedia PDF Downloads 146
39 Application of Satellite Remote Sensing in Support of Water Exploration in the Arab Region

Authors: Eman Ghoneim

Abstract:

The Arabian deserts include some of the driest areas on Earth, yet their landforms preserve a record of past wet climates. During humid phases, the desert was green and contained permanent rivers, inland deltas and lakes. Some of their water would have seeped down and replenished the groundwater aquifers. When the wet periods came to an end, several thousand years ago, the entire region transformed into an extended band of desert, and its original fluvial surface was totally covered by windblown sand. In this work, radar and thermal infrared images were used to reveal numerous hidden surface/subsurface features. Radar's long wavelength has the unique ability to penetrate dry surface sands and uncover buried subsurface terrain. Thermal infrared has also proven capable of spotting cooler moist areas, particularly in hot dry surfaces. Integrating Radarsat images and GIS revealed several previously unknown paleoriver and lake basins in the region. One of these systems, known as the Kufrah, is the largest yet identified river basin in the Eastern Sahara. This river basin, which straddles the border between Egypt and Libya, flowed north parallel to the adjacent Nile River, with an extensive drainage area of 235,500 km2 and a massive valley width of 30 km in some parts. This river most probably served as a spillway for an overflow from Megalake Chad to the Mediterranean Sea and, thus, may have acted as a natural water corridor used by human ancestors to migrate northward across the Sahara. The Gilf-Kebir is another large paleoriver system, located just east of Kufrah, which emanates from the Gilf Plateau in Egypt. Both river systems terminate in vast inland deltas at the southern margin of the Great Sand Sea. The trends of their distributary channels indicate that both rivers drained to a topographic depression that was periodically occupied by a massive lake. During dry climates, the lake dried up and was roofed by sand deposits, which today form the Great Sand Sea.
The enormity of the lake basin explains why continuous extraction of groundwater in this area is possible. A similar lake basin, delimited by former shorelines, was detected by radar space data just across the border in Sudan. This lake, called the Northern Darfur Megalake, has a massive size of 30,750 km2. These former lakes and rivers could potentially hold vast reservoirs of groundwater, oil and natural gas at depth. Similar to radar data, thermal infrared images have proven useful in detecting potential locations of subsurface water accumulation in desert regions. Analysis of both Aster and daily MODIS thermal channels reveals several subsurface cool moist patches in the sandy desert of the Arabian Peninsula. The analysis indicated that these evaporative cooling anomalies resulted from the subsurface transmission of monsoonal rainfall from the mountains to the adjacent plain. Drilling a number of wells in several locations proved the presence of productive water aquifers, confirming the validity of the data and the approaches adopted for water exploration in dry regions.
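The evaporative-cooling patches described above are, in essence, pixels that are anomalously cool relative to the rest of the scene. As a rough illustration of that idea only (not the authors' actual processing chain; the z-score threshold is an assumed value), one could flag cool anomalies in a list of land-surface temperatures like this:

```python
def cool_anomalies(temps, threshold=-1.5):
    """Return indices of pixels whose temperature is unusually low
    relative to the scene mean, as a crude stand-in for the
    evaporative-cooling anomaly mapping described above.
    `threshold` is an assumed z-score cutoff, not a published value."""
    n = len(temps)
    mean = sum(temps) / n
    std = (sum((t - mean) ** 2 for t in temps) / n) ** 0.5
    if std == 0.0:
        return []  # uniform scene: nothing anomalous
    return [i for i, t in enumerate(temps) if (t - mean) / std < threshold]

# Hypothetical scene: 20 hot dry-sand pixels plus two cooler moist patches.
flagged = cool_anomalies([45.0] * 20 + [38.0, 37.0])
```

Real workflows would of course operate on calibrated raster bands and account for terrain and viewing geometry; this sketch only conveys the anomaly-detection principle.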

Keywords: Radarsat, SRTM, MODIS, thermal infrared, near-surface water, ancient rivers, desert, Sahara, Arabian Peninsula

Procedia PDF Downloads 223
38 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants and other contaminants from water, offering an alternative to the addition of metal salts, polymers and polyelectrolytes for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor in its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and by accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional and turbulent flow.
The observed head loss was also compared to the head loss predicted by several known theoretical and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section reactor configurations and one multiple concentric annular cross section configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value for the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity, and that flow is steady, uniform and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of these assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distribution inside the reactor is in fact not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were obtained. Other factors that may affect the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material over time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
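For a concrete feel for the hydraulic-diameter approach that such theoretical models rely on, here is a minimal sketch of a Darcy-Weisbach head-loss estimate for a concentric annulus. It is illustrative only: it applies the circular-pipe friction laws (64/Re laminar, the explicit Swamee-Jain formula for turbulent flow) to the annular hydraulic diameter, which is exactly the kind of approximation the study tests against measurements; all numerical values below are assumed, not taken from the paper.

```python
import math

def annular_head_loss(flow_rate, d_outer, d_inner, length,
                      nu=1.0e-6, roughness=1.5e-6, g=9.81):
    """Darcy-Weisbach head loss (m) for flow through a concentric annulus,
    using the hydraulic-diameter approximation. SI units throughout."""
    area = math.pi / 4.0 * (d_outer**2 - d_inner**2)
    d_h = d_outer - d_inner            # hydraulic diameter of an annulus, 4A/P
    v = flow_rate / area               # mean velocity
    re = v * d_h / nu                  # Reynolds number
    if re < 2300:                      # laminar: circular-pipe form f = 64/Re
        f = 64.0 / re
    else:                              # turbulent: Swamee-Jain explicit formula
        f = 0.25 / math.log10(roughness / (3.7 * d_h) + 5.74 / re**0.9)**2
    return f * (length / d_h) * v**2 / (2.0 * g)

# Hypothetical reactor: 50 mm outer / 30 mm inner electrode, 2 m long, 0.5 L/s.
head = annular_head_loss(5.0e-4, 0.05, 0.03, 2.0)
```

Note that for a true annulus the laminar friction factor deviates from 64/Re (it depends on the radius ratio), which is one reason the measured and predicted losses in studies like this one can disagree.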

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 199
37 Implementation of Building Information Modelling to Monitor, Assess, and Control the Indoor Environmental Quality of Higher Education Buildings

Authors: Mukhtar Maigari

Abstract:

The landscape of Higher Education (HE) institutions, especially following the COVID-19 pandemic, necessitates advanced approaches to manage Indoor Environmental Quality (IEQ), which is crucial for the comfort, health, and productivity of students and staff. This study investigates the application of Building Information Modelling (BIM) as a multifaceted tool for monitoring, assessing, and controlling IEQ in HE buildings, aiming to bridge the gap between traditional management practices and the innovative capabilities of BIM. Central to the study is a comprehensive literature review, which lays the foundation by examining current knowledge and technological advancements in both IEQ and BIM. This review sets the stage for a deeper investigation into the practical application of BIM in IEQ management. The methodology consists of Post-Occupancy Evaluation (POE), which encompasses physical monitoring, questionnaire surveys, and interviews under the umbrella of case studies. The physical data collection focuses on vital IEQ parameters such as temperature, humidity, and CO2 levels, conducted using equipment including dataloggers to ensure accurate data. Complementing this, questionnaire surveys gather perceptions and satisfaction levels from students, providing valuable insights into the subjective aspects of IEQ. The interview component, targeting facilities management teams, offers an in-depth perspective on IEQ management challenges and strategies. The research then develops a conceptual BIM-based framework, informed by the findings from the case studies and empirical data. This framework is designed to demonstrate the critical functions necessary for effective IEQ monitoring, assessment, control and automation, with real-time data handling capabilities. The framework in turn leads to the development and testing of a BIM-based prototype tool.
This prototype leverages software such as Autodesk Revit with its visual programming tool, Dynamo, together with an Arduino-based sensor network, thereby allowing a real-time flow of IEQ data for monitoring, control and even automation. By harnessing the capabilities of BIM technology, the study presents a forward-thinking approach that aligns with current sustainability and wellness goals, particularly vital in the post-COVID-19 era. The integration of BIM in IEQ management promises not only to enhance the health, comfort, and energy efficiency of educational environments but also to transform them into more conducive spaces for teaching and learning. Furthermore, this research could influence the future of HE buildings by prompting universities and government bodies to re-evaluate and improve teaching and learning environments. It demonstrates how the synergy between IEQ and BIM can empower stakeholders to monitor IEQ conditions more effectively and make informed decisions in real time. Moreover, the developed framework has broader applications as well; it can serve as a tool for other sustainability assessments, such as energy analysis in HE buildings, leveraging measured data synchronized with the BIM model. In conclusion, this study bridges the gap between theoretical research and real-world application by demonstrating in practice how advanced technologies like BIM can be effectively integrated to enhance environmental quality in educational institutions.
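As a simplified illustration of the kind of rule such monitoring logic could apply to incoming sensor readings (the comfort bands below are assumed for this sketch, not taken from the study or from any standard):

```python
# Assumed comfort bands for the sketch; a real deployment would take
# these from guidance such as CIBSE/ASHRAE documents, not from here.
IEQ_LIMITS = {
    "temperature_c": (19.0, 25.0),
    "humidity_pct": (40.0, 60.0),
    "co2_ppm": (0.0, 1000.0),
}

def assess_ieq(reading):
    """Flag each monitored parameter as within or outside its comfort band."""
    report = {}
    for parameter, (low, high) in IEQ_LIMITS.items():
        value = reading[parameter]
        report[parameter] = "ok" if low <= value <= high else "out of range"
    return report

# Hypothetical single reading from one room's sensor node.
sample = {"temperature_c": 22.5, "humidity_pct": 65.0, "co2_ppm": 1250.0}
status = assess_ieq(sample)
```

In a BIM-linked prototype, each flagged reading would be written back against the corresponding room element in the model so that out-of-range spaces can be visualized and acted on.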

Keywords: BIM, POE, IEQ, HE-buildings

Procedia PDF Downloads 28
36 Phenotype and Psychometric Characterization of Phelan-Mcdermid Syndrome Patients

Authors: C. Bel, J. Nevado, F. Ciceri, M. Ropacki, T. Hoffmann, P. Lapunzina, C. Buesa

Abstract:

Background: Phelan-McDermid syndrome (PMS) is a genetic disorder caused by deletion of the terminal region of chromosome 22 or mutation of the SHANK3 gene. Shank3 disruption in mice leads to dysfunction of synaptic transmission, which can be restored by epigenetic regulation with Lysine-Specific Demethylase 1 (LSD1) inhibitors. PMS presents with a variable degree of intellectual disability, delayed or absent speech, autism spectrum disorder symptoms, low muscle tone, motor delays and epilepsy. Vafidemstat is an LSD1 inhibitor in Phase II clinical development with a well-established and favorable safety profile, and data supporting the restoration of memory and cognition defects as well as reduction of agitation and aggression in several animal models and clinical studies. Therefore, vafidemstat has the potential to become a first-in-class precision medicine approach to treat PMS patients. Aims: The goal of this research is to perform an observational trial to psychometrically characterize individuals carrying deletions in SHANK3 and build a foundation for subsequent precision psychiatry clinical trials with vafidemstat. Methodology: This study is characterizing the clinical profile of 20 to 40 subjects, > 16 years old, with a genotypically confirmed PMS diagnosis. Subjects will complete a battery of neuropsychological scales, including the Repetitive Behavior Questionnaire (RBQ), the Vineland Adaptive Behavior Scales, the Autism Diagnostic Observation Schedule (ADOS-2), the Battelle Developmental Inventory and the Behavior Problems Inventory (BPI). Results: By March 2021, 19 patients had been enrolled. Unsupervised hierarchical clustering of the results obtained so far identifies 3 groups of patients, characterized by different profiles of cognitive and behavioral scores. The first cluster is characterized by low Battelle age, high ADOS and low Vineland, RBQ and BPI scores.
Low Vineland, RBQ, and BPI scores are also detected in the second cluster, which in contrast has a high Battelle age and low ADOS scores. The third cluster is intermediate for the Battelle, Vineland, and ADOS scores while displaying the highest levels of aggression (high BPI) and repetitive behaviors (high RBQ). In line with the observation that female patients are generally affected by milder forms of autistic symptoms, no male patients are present in the second cluster. Dividing the results by gender highlights that male patients in the third cluster are characterized by a higher frequency of aggression, whereas female patients from the same cluster display a tendency toward higher repetitive behavior. Finally, statistically significant differences in deletion sizes are detected when comparing the three clusters (also after correcting for gender), and deletion size appears to be positively correlated with ADOS scores and negatively correlated with Vineland A and C scores. No correlation is detected between deletion size and the BPI and RBQ scores. Conclusions: Precision medicine may open a new way to understand and treat Central Nervous System disorders. Epigenetic dysregulation has been proposed as an important mechanism in the pathogenesis of schizophrenia and autism. Vafidemstat holds exciting therapeutic potential in PMS, and this study will provide data regarding the optimal endpoints for a future clinical study exploring vafidemstat's ability to treat SHANK3-associated psychiatric disorders.
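The unsupervised hierarchical clustering described above can be sketched as follows. The score vectors here are hypothetical illustrative values, not study data, and SciPy's Ward-linkage clustering is one plausible choice of method:

```python
# Sketch of unsupervised hierarchical clustering of neuropsychological
# scores, as described in the abstract. All score values are hypothetical
# illustrations, not the study's data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows = patients; columns = [Battelle age, ADOS, Vineland, RBQ, BPI]
scores = np.array([
    [2.0, 18, 40,  5, 10],   # low Battelle, high ADOS  (cluster-1-like)
    [2.2, 17, 38,  6, 12],
    [6.5,  8, 45,  4,  9],   # high Battelle, low ADOS  (cluster-2-like)
    [6.0,  9, 47,  5,  8],
    [4.0, 13, 55, 20, 30],   # intermediate, high RBQ/BPI (cluster-3-like)
    [4.2, 12, 53, 22, 28],
])

# standardize each scale so no single score dominates the distances
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# Ward linkage on Euclidean distances, then cut the tree into 3 clusters
tree = linkage(z, method="ward")
labels = fcluster(tree, t=3, criterion="maxclust")
print(labels)
```

Cutting the Ward tree at three clusters recovers groupings analogous to the three score profiles reported.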

Keywords: autism, epigenetics, LSD1, personalized medicine

Procedia PDF Downloads 141
35 Renewable Energy Micro-Grid Control Using Microcontroller in LabVIEW

Authors: Meena Agrawal, Chaitanya P. Agrawal

Abstract:

Power systems are transforming and becoming smarter, with technological innovations enabling them to address sustainable energy needs, rising environmental concerns, economic benefits, and quality requirements simultaneously. The advantages provided by the interconnection of renewable energy resources are becoming more viable and dependable with smart control technologies. The main limitation of most renewable resources, their diversity and intermittency, causes problems in power quality, grid stability, reliability, and security, which these efforts aim to remedy. Optimal energy management by intelligent micro-grids at the distribution end of the power system is needed to accommodate sustainable renewable Distributed Energy Resources on a large scale across the power grid. All over the world, smart grids are now emerging as a foremost concern of infrastructure upgrade programs. The hardware setup includes an NI cRIO 9022 Compact Reconfigurable Input Output microcontroller board connected to the PC on a LAN router, with three hardware modules. The Real-Time Embedded Controller is a reconfigurable controller device consisting of an embedded real-time processor for communication and processing, a reconfigurable chassis housing the user-programmable FPGA, eight hot-swappable I/O modules, and graphical LabVIEW system design software. It has been employed for signal analysis, control, acquisition, and logging of the renewable sources with LabVIEW Real-Time applications. The cRIO chassis controls the timing for the modules and handles communication with the PC over USB, Ethernet, or 802.11 Wi-Fi buses. It combines modular I/O, real-time processing, and NI LabVIEW programmability. In the presented setup, five channels of the Analog Input Module NI 9205 have been used for input analog voltage signals from renewable energy sources, and four channels of the NI 9227 have been used for input analog current signals of the renewable sources.
For switching actions based on the programming logic developed in the software, a module of electromechanical relays (single-pole single-throw, 4 channels, electrically isolated, with an LED indicating the state of each channel) has been used to isolate the renewable sources on fault occurrence, as decided by the logic in the program. The module for Ethernet-based data acquisition, an ENET 9163 Ethernet Carrier connected on the LAN router for data acquisition from a remote source over Ethernet, also has the NI 9229 module installed. The LabVIEW platform has been employed for efficient data acquisition, monitoring, and control. The control logic used in the program for operating the hardware switching of the fault relays is portrayed as a flowchart. A communication system has been successfully developed amongst the sources and loads connected on different computers, using the Hypertext Transfer Protocol (HTTP) or TCP/IP over the Ethernet local area network. Two main I/O interfacing clients control the switching of the renewable energy sources over the internet or intranet. The paper presents experimental results of the described setup for intelligent control of the micro-grid for renewable energy sources, besides the control of the micro-grid with data acquisition and control hardware based on a microcontroller with a visual program developed in LabVIEW.
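Because the actual switching logic runs as a graphical LabVIEW diagram on the cRIO, the fault-relay behaviour it describes can only be sketched in a text language. The following Python sketch uses hypothetical source names and a hypothetical over-current trip threshold:

```python
# Sketch of the relay fault-isolation logic described in the abstract:
# each renewable source is disconnected (its relay opened) when its measured
# current exceeds a trip limit. Threshold and channel data are hypothetical;
# the actual logic is a LabVIEW diagram driving the 4-channel relay module.
OVERCURRENT_LIMIT_A = 10.0  # hypothetical per-source trip level, amperes

def update_relays(currents, relay_closed):
    """Open the relay of any source whose current exceeds the limit."""
    for source, amps in currents.items():
        if amps > OVERCURRENT_LIMIT_A:
            relay_closed[source] = False  # isolate the faulty source
    return relay_closed

relays = {"solar": True, "wind": True, "microhydro": True}
readings = {"solar": 4.2, "wind": 12.7, "microhydro": 6.1}  # e.g. NI 9227 samples
relays = update_relays(readings, relays)
print(relays)  # the over-current source is tripped, the others stay connected
```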

Keywords: data acquisition and control, LabVIEW, microcontroller cRIO, Smart Micro-Grid

Procedia PDF Downloads 301
34 Charcoal Traditional Production in Portugal: Contribution to the Quantification of Air Pollutant Emissions

Authors: Cátia Gonçalves, Teresa Nunes, Inês Pina, Ana Vicente, C. Alves, Felix Charvet, Daniel Neves, A. Matos

Abstract:

The production of charcoal relies on rudimentary technologies using traditional brick kilns. Charcoal is produced under pyrolysis conditions: breaking down the chemical structure of biomass under high temperature in the absence of air. The amounts of the pyrolysis products (charcoal, pyroligneous extract, and flue gas) depend on various parameters, including temperature, time, pressure, kiln design, and wood characteristics such as moisture content. The activity is recognized for its inefficiency and high pollution levels but remains poorly characterized. It is widely distributed and constitutes a vital economic activity in certain regions of Portugal, playing a relevant role in the management of woody residues. The location of the units determines the biomass used for charcoal production. The Portalegre district, in the Alto Alentejo region (Portugal), is a good example: an essentially rural area with a predominant farming, agricultural, and forestry profile and a significant charcoal production activity. In this district, a recent inventory identifies almost 50 charcoal production units, equivalent to more than 450 kilns, of which 80% appear to be in operation. A field campaign was designed with the objective of determining the composition of the emissions released during a charcoal production cycle. A total of 30 samples of particulate matter and 20 gas samples in Tedlar bags were collected. Particulate and gas samplings were performed in parallel, 2 in the morning and 2 in the afternoon, alternating the inlet heads (PM₁₀ and PM₂.₅) in the particulate sampler. The gas and particulate samples were collected in the plume, as close as possible to the chimney emission point. The biomass (dry basis) used in the carbonization process was a mixture of cork oak (77 wt.%), holm oak (7 wt.%), stumps (11 wt.%), and charred wood (5 wt.%) from previous carbonization processes.
A cylindrical batch kiln (80 m³) with 4.5 m diameter and 5 m height was used in this study. The composition of the gases was determined by gas chromatography, while the particulate samples (PM₁₀, PM₂.₅) were subjected to different analytical techniques (thermo-optical transmission technique, ion chromatography, HPAE-PAD, and GC-MS after solvent extraction) after prior gravimetric determination, to study their organic and inorganic constituents. The charcoal production cycle presents widely varying operating conditions, which are reflected in the composition of the gases and particles produced and emitted throughout the process. The concentrations of PM₁₀ and PM₂.₅ in the plume were calculated, ranging between 0.003 and 0.293 g m⁻³ and between 0.004 and 0.292 g m⁻³, respectively. Total carbon, inorganic ions, and sugars account, on average, for 65% and 56%, 2.8% and 2.3%, and 1.27% and 1.21% of PM₁₀ and PM₂.₅, respectively. The organic fraction studied so far includes more than 30 aliphatic compounds and 20 PAHs. The emission factors of particulate matter for charcoal production in the traditional kiln were 33 g/kg (wood, dry basis) and 27 g/kg (wood, dry basis) for PM₁₀ and PM₂.₅, respectively. With the data obtained in this study, it is possible to fill the lack of information about the environmental impact of traditional charcoal production in Portugal. Acknowledgment: The authors thank FCT – the Portuguese Science Foundation, I.P., and the Ministry of Science, Technology and Higher Education of Portugal for financial support within the scope of the projects CHARCLEAN (PCIF/GVB/0179/2017) and CESAM (UIDP/50017/2020 + UIDB/50017/2020).
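The emission factors above relate emitted particulate mass to the mass of dry wood carbonized. A minimal sketch of that arithmetic follows; all numbers are hypothetical illustrations, not the study's measurements:

```python
# Sketch of an emission-factor calculation of the kind reported above:
# EF (g per kg of dry wood) = total particulate mass emitted / dry wood mass.
# All numbers are hypothetical illustrations, not the study's data.
pm10_concentration_g_m3 = 0.15   # plume concentration (within the reported range)
flue_gas_volume_m3 = 2.0e5       # hypothetical total gas emitted over the cycle
dry_wood_kg = 1.0e3              # hypothetical wood charge, dry basis

pm10_mass_g = pm10_concentration_g_m3 * flue_gas_volume_m3
ef_g_per_kg = pm10_mass_g / dry_wood_kg
print(f"EF(PM10) = {ef_g_per_kg:.0f} g/kg wood (dry basis)")
```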

Keywords: brick kilns, charcoal, emission factors, PAHs, total carbon

Procedia PDF Downloads 110
33 Trajectory Optimization for Autonomous Deep Space Missions

Authors: Anne Schattel, Mitja Echim, Christof Büskens

Abstract:

Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects like asteroids offers two main incentives: one is to find rare earth elements, the other to gain scientific knowledge of the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may be applied to other, earth-bound applications such as deep-sea navigation and autonomous driving as well. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation on the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time-capable simulation using virtual reality is developed within KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs): the so-called indirect and direct methods. Indirect methods have been studied for several decades, and their usage requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g.
sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are considered by using a multi-criteria objective function. The resulting non-linear high-dimensional optimization problems are solved by using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the need for trajectory optimization as a foundation for autonomous decision making during deep space missions. Simultaneously, they show the enormous increase in possibilities for flight maneuvers by being able to consider different and opposite mission objectives.
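The direct (full discretization) approach described above can be illustrated on a toy problem: a 1-D double integrator stands in for the orbital dynamics, and SciPy's SLSQP (an SQP method) stands in for WORHP. The grid size, dynamics, and boundary values are all illustrative assumptions:

```python
# Toy illustration of direct transcription: discretize states and controls
# of a 1-D double integrator (not the paper's orbital dynamics) and solve
# the resulting finite-dimensional NLP with SLSQP, an SQP method
# (the study itself uses WORHP).
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0                  # grid intervals, fixed final time
h = T / N                       # step size

def unpack(z):
    """z = [x_0..x_N, v_0..v_N, u_0..u_{N-1}]"""
    return z[:N + 1], z[N + 1:2 * N + 2], z[2 * N + 2:]

def objective(z):               # minimize control energy (one possible criterion)
    _, _, u = unpack(z)
    return h * np.sum(u ** 2)

def defects(z):                 # enforce the ODEs x' = v, v' = u (explicit Euler)
    x, v, u = unpack(z)
    dx = x[1:] - x[:-1] - h * v[:-1]
    dv = v[1:] - v[:-1] - h * u
    return np.concatenate([dx, dv])

def boundary(z):                # start at rest at 0, end at rest at 1
    x, v, _ = unpack(z)
    return np.array([x[0], v[0], x[-1] - 1.0, v[-1]])

z0 = np.zeros(3 * N + 2)        # all-zero initial guess
res = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": defects},
                            {"type": "eq", "fun": boundary}])
x, v, u = unpack(res.x)
print(res.success, round(float(x[-1]), 4))
```

The defect constraints are what turn the infinite-dimensional OCP into a finite NLP: the ODE must hold only at the grid points, and the optimizer treats all discretized states and controls as decision variables.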

Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning

Procedia PDF Downloads 386
32 Understanding the Impact of Resilience Training on Cognitive Performance in Military Personnel

Authors: Haji Mohammad Zulfan Farhi Bin Haji Sulaini, Mohammad Azeezudde’en Bin Mohd Ismaon

Abstract:

The demands placed on military athletes extend beyond physical prowess to encompass cognitive resilience in high-stress environments. This study investigates the effects of resilience training on the cognitive performance of military athletes, shedding light on the potential benefits and implications for optimizing their overall readiness. In a rapidly evolving global landscape, armed forces worldwide are recognizing the importance of cognitive resilience alongside physical fitness. The study employs a mixed-methods approach, incorporating quantitative cognitive assessments and qualitative data from military athletes undergoing resilience training programs. Cognitive performance is evaluated through a battery of tests, including measures of memory, attention, decision-making, and reaction time. The participants, drawn from various branches of the military, are divided into experimental and control groups. The experimental group undergoes a comprehensive resilience training program, while the control group receives traditional physical training without a specific focus on resilience. The initial findings indicate a substantial improvement in cognitive performance among military athletes who have undergone resilience training. These improvements are particularly evident in domains such as attention and decision-making. The experimental group demonstrated enhanced situational awareness, quicker problem-solving abilities, and increased adaptability in high-stress scenarios. These results suggest that resilience training not only bolsters mental toughness but also positively impacts cognitive skills critical to military operations. In addition to quantitative assessments, qualitative data is collected through interviews and surveys to gain insights into the subjective experiences of military athletes. 
Preliminary analysis of these narratives reveals that participants in the resilience training program report higher levels of self-confidence, emotional regulation, and an improved ability to manage stress. These psychological attributes contribute to their enhanced cognitive performance and overall readiness. Moreover, this study explores the potential long-term benefits of resilience training. By tracking participants over an extended period, we aim to assess the durability of cognitive improvements and their effects on overall mission success. Early results suggest that resilience training may serve as a protective factor against the detrimental effects of prolonged exposure to stressors, potentially reducing the risk of burnout and psychological trauma among military athletes. This research has significant implications for military organizations seeking to optimize the performance and well-being of their personnel. The findings suggest that integrating resilience training into the training regimen of military athletes can lead to a more resilient and cognitively capable force. This, in turn, may enhance mission success, reduce the risk of injuries, and improve the overall effectiveness of military operations. In conclusion, this study provides compelling evidence that resilience training positively impacts the cognitive performance of military athletes. The preliminary results indicate improvements in attention, decision-making, and adaptability, as well as increased psychological resilience. As the study progresses and incorporates long-term follow-ups, it is expected to provide valuable insights into the enduring effects of resilience training on the cognitive readiness of military athletes, contributing to the ongoing efforts to optimize military personnel's physical and mental capabilities in the face of ever-evolving challenges.

Keywords: military athletes, cognitive performance, resilience training, cognitive enhancement program

Procedia PDF Downloads 55
31 Optimizing Productivity and Quality through the Establishment of a Learning Management System for an Agency-Based Graduate School

Authors: Maria Corazon Tapang-Lopez, Alyn Joy Dela Cruz Baltazar, Bobby Jones Villanueva Domdom

Abstract:

The requisite for an organization implementing a quality management system to sustain compliance with the requirements and its commitment to continuous improvement is even higher. The offices and units are expected to show high and consistent compliance with the established processes and procedures. The Development Academy of the Philippines (DAP) has been operating under project management, for which it has a quality management certification. To further realize its mandate as a think tank and capacity builder of the government, DAP expanded its operation and started to grant graduate degrees through its Graduate School of Public and Development Management (GSPDM). As the academic arm of the Academy, GSPDM offers graduate degree programs in public management and productivity and quality, aligned with the institutional thrusts. For a time, the documented procedures and processes of project management seemed to fit the Graduate School. However, there has been significant growth in the operations of the GSPDM in terms of the graduate programs offered, which directly increases the number of students. There is an apparent necessity to align the project management system with a more educational system; otherwise, it will no longer be responsive to the developments that are taking place. The GSPDM strongly advocates and encourages its students to pursue internal and external improvement to cope with the challenges of providing quality service to their own clients and to the country. If innovation does not take root in the grounds of GSPDM, then how will it serve the purpose of "walking the talk"? This research was conducted to assess the diverse flow of the existing internal operations and processes of DAP's project management and GSPDM's school management, which will serve as the basis for developing a system that harmonizes the two into one: the Learning Management System.
The study documented the existing processes of GSPDM following the project management phases of conceptualization & development, negotiation & contracting, mobilization, implementation, and closure into different flowcharts of the key activities. The primary sources of information were the different groups involved in the delivery of the graduate programs: the executive, the learning management team, and the administrative support offices. The Learning Management System (LMS) shall capture the unique and critical processes of the GSPDM as a degree-granting unit of the Academy. The LMS is the harmonized project management and school management system that shall serve as the standard system and procedure for all the programs within the GSPDM. The unique processes cover the three important areas of school management – student, curriculum, and faculty. The required processes of these main areas, such as enrolment, course syllabus development, and faculty evaluation, were appropriately placed within the phases of the project management system. Further, the research shall identify critical reports and generate manageable documents and records to ensure accurate, consistent, and reliable information. The researchers conducted an in-depth review of the DAP-GSPDM's mandate, analyzed the various documents, and held a series of focus group discussions. A comprehensive review of the prior flowchart system and of various models of school management systems was made. Subsequently, the final output of the research is a work instructions manual that will be presented to the Academy's Quality Management Council and eventually included as an additional scope for ISO certification. The manual shall include documented forms, iterative flowcharts, and a program Gantt chart, with automated systems to be developed in parallel.

Keywords: productivity, quality, learning management system, agency-based graduate school

Procedia PDF Downloads 296
30 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis

Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam

Abstract:

Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantations is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult due to limitations in genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high and low yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potentials. A total of 6 individual Jatropha plants from 4 accessions described as high and low yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, each individual plant (n=10) from the high and low yielding populations was screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSamples SAMN05827448-65, respectively. Each transcriptome was subjected to functional annotation analysis of sequence datasets using the Blast2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high yielding Jatropha accessions with an average NFPP = 60 and FPI > 10, whereas the low yielding accessions yielded an average NFPP = 10 and FPI < 5. Next generation sequencing revealed genes with differential expression in the high yielding Jatropha relative to the low yielding plants. Distinct differences were observed in transcript levels associated with photosynthesis metabolism.
The DEG collection in the low yielding population showed comparable CAM photosynthetic metabolism and photorespiration, evident in the following: a phosphoenolpyruvate phosphate translocator, chloroplastic-like isoform, with a 2.5 fold change (FC), and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to the significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing heart of the plant body. A large number of the DEGs in the high-yielding population were found attributable to the chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1,4-alpha-glucan branching enzyme chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). These results parallel a significant increase in chlorophyll a content in the high yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM), at 2.9 FC, which codes for distant stomatal distribution and patterning in the high-yielding population, may explain a high concentration of CO₂. The results were in agreement with the role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation. From the physiological descriptions, a high chlorophyll a content and an even distribution of stomata in the leaf contribute to the better photosynthetic efficiency of the high yielding Jatropha compared to the low yielding population.
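The fold-change (FC) values quoted above compare transcript abundance between the high and low yielding populations. A minimal sketch of the computation follows; the expression values are hypothetical, not the study's read counts:

```python
# Sketch of the fold-change comparison behind the DEG lists above:
# fold change = normalized expression (high-yield) / expression (low-yield),
# often reported alongside its log2. Values below are hypothetical.
import math

expression = {                     # (high-yield, low-yield), arbitrary units
    "chlorophyllase-1-like": (50.8, 10.0),
    "beta-amylase":          (36.6, 10.0),
    "TMM":                   (29.0, 10.0),
}

for gene, (high, low) in expression.items():
    fc = high / low                # linear fold change
    log2fc = math.log2(fc)         # symmetric scale used by most DEG tools
    print(f"{gene}: FC={fc:.2f}, log2FC={log2fc:.2f}")
```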

Keywords: chlorophyll, gene expression, genetic variation, stomata

Procedia PDF Downloads 213
29 Coastal Foodscapes as Nature-Based Coastal Regeneration Systems

Authors: Gulce Kanturer Yasar, Hayriye Esbah Tuncay

Abstract:

Cultivated food production systems have coexisted harmoniously with nature for thousands of years through ancient techniques. Built on this experience, experimentation, and discovery, these culturally embedded methods have evolved to sustain food production, restore ecosystems, and adapt harmoniously to nature. In this era, as we seek solutions to food security challenges, enhancing and repairing our food production systems is crucial, making them more resilient to future disasters without harming the ecosystem. Instead of unsustainable conventional systems with ongoing destructive effects, we must investigate innovative and restorative production systems that integrate ancient wisdom and technology. Whether we consider agricultural fields, pastures, forests, coastal wetland ecosystems, or lagoons, it is crucial to harness the potential of these natural resources in addressing future global challenges, fostering both socio-economic resilience and ecological sustainability through strategic organization for food production. When thoughtfully designed and managed, marine-based food production has the potential to function as a living infrastructure system that addresses social and environmental challenges, despite its known adverse impacts on the environment and local economies. These areas are also stages of daily life, vibrant hubs where local culture is produced and shared, contributing to the distinctive rural character of coastal settlements and exhibiting numerous spatial expressions of a public nature. Throughout the history of humanity, indigenous communities have engaged in sustainable production practices that have provided goods for food, trade, culture, and the environment. Ecosystem restoration and socio-economic resilience can be achieved by combining production techniques based on the ecological knowledge developed by indigenous societies with modern technologies.
Coastal lagoons are highly productive coastal features that provide various natural services and societal values. They are especially vulnerable to the severe physical, ecological, and social impacts of changing, challenging global conditions because of their placement within the coastal landscape. Coastal lagoons are crucial in sustaining fisheries productivity, providing storm protection, supporting tourism, and offering other natural services that hold significant value for society. Although there is considerable literature on the physical and ecological dimensions of lagoons, much less focuses on their economic and social values. This study will discuss the possibilities for coastal lagoons to become both ecologically sustainable and socio-economically resilient while maintaining their productivity by combining local techniques and modern technologies. The case study will present Turkey's traditional aquaculture method, "Dalyans," predominantly operated by small-scale farmers in coastal lagoons. Due to human, ecological, and economic factors, dalyans are losing their landscape characteristics and efficiency. These 1000-year-old techniques, rooted in centuries of traditional and agroecological knowledge, are under threat from tourism, urbanization, and unsustainable agricultural practices. Thus, the number of active dalyans has diminished from 29 to approximately 4 or 5. To deal with the adverse socio-economic and ecological consequences in Turkey's coastal areas, it is essential to conserve dalyans by protecting their indigenous practices while incorporating contemporary methods. This study seeks to generate scenarios that envision the potential ways protection and development can manifest within the case study areas.

Keywords: coastal foodscape, lagoon aquaculture, regenerative food systems, watershed food networks

Procedia PDF Downloads 41
28 In-situ Mental Health Simulation with Airline Pilot Observation of Human Factors

Authors: Mumtaz Mooncey, Alexander Jolly, Megan Fisher, Kerry Robinson, Robert Lloyd, Dave Fielding

Abstract:

Introduction: The integration of the WingFactors in-situ simulation programme has transformed the education landscape at the Whittington Health NHS Trust. To date, there have been a total of 90 simulations - 19 aimed at Paediatric trainees, including 2 Child and Adolescent Mental Health (CAMHS) scenarios. The opportunity for joint debriefs provided by clinical faculty and airline pilots has created an exciting new avenue to explore human factors within psychiatry. Through the use of real clinical environments and primed actors, the benefits of high fidelity simulation and of interdisciplinary and interprofessional learning have been highlighted. The use of in-situ simulation within psychiatry is a newly emerging concept, and its success here has been recognised by unanimously positive feedback from participants and acknowledgement through nomination for the Health Service Journal (HSJ) Award (Best Education Programme 2021). Methodology: The first CAMHS simulation featured a collapsed patient in the toilet with a ligature tied around her neck, accompanied by a distressed parent. This required participants to consider emergency physical management of the case, alongside helping to contain the mother and maintaining situational awareness when transferring the patient to an appropriate clinical area. The second simulation was based on a 17-year-old girl attempting to leave the ward after presenting with an overdose, posing potential risk to herself. The safe learning environment enabled participants to explore techniques to engage the young person, understand her concerns, and consider the involvement of other members of the multidisciplinary team. The scenarios were followed by an immediate 'hot' debrief, combining technical feedback with human factors feedback from uniformed airline pilots and clinicians. The importance of psychological safety was paramount, encouraging open and honest contributions from all participants.
Key learning points were summarized into written documents and circulated. Findings: The in-situ simulations demonstrated the need for practical changes both in the Emergency Department and on the Paediatric ward. The presence of airline pilots provided a novel way to debrief on human factors. The following key themes were identified: - Team briefing ('Golden 5 minutes') - taking a few moments to establish experience, initial roles, and strategies amongst the team can reduce the need for conversations in front of a distressed patient or anxious relative. - Use of checklists / guidelines - principles associated with checklist usage (control of pace, rigor, team situational awareness), instead of reliance on accurate memory recall when under pressure. - Read-back - immediate repetition of safety-critical instructions (e.g. drug / dosage) to mitigate the risks associated with miscommunication. - Distraction management - balancing the risk of losing a team member to manage a distressed relative versus the impact on the care of the young person. - Task allocation - the value of implementing 'The 5 A's' (Availability, Address, Allocate, Ask, Advise) for effective task allocation. Conclusion: 100% of participants have requested more simulation training. The involvement of airline pilots has led to a shift in hospital culture, bringing to the forefront the value of human factors focused training and multidisciplinary simulation. This has been of significant value not only in physical health but also in mental health simulation.

Keywords: human factors, in-situ simulation, inter-professional, multidisciplinary

Procedia PDF Downloads 85
27 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil

Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes

Abstract:

Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they end up becoming disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. The measurement of these parameters is made by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analysis and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models work from the combination of slope stability models and hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety of a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and associates the hydrological model, which is based on the Richards equation, with the stability model based on the limit equilibrium principle. Therefore, the aims of this work are to model the slope stability of Campos do Jordao with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State in the Mantiqueira Mountains and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provide the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivity. The analyzed period is from March 6th to March 8th of 2017. As a result, the TRIGRS model calculates the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, proving its efficiency in identifying landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80mm/m² in 72 hours might trigger landslides in urban and natural slopes. The geotechnical and geophysical methods are shown to be very useful to identify the soil properties and provide the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with TRIGRS modeling of landslide-susceptible areas is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model with weather forecasts to prevent disasters on urban slopes.
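The stability side of a TRIGRS-style calculation, the infinite-slope Factor of Safety, can be sketched as follows. This is a minimal illustration in Python of how rising pore pressure during rainfall drives the Factor of Safety down; the parameter values are hypothetical, not the ones measured at Campos do Jordao.

```python
import math

def factor_of_safety(c, phi_deg, gamma_s, z, theta_deg, psi, gamma_w=9.81):
    """Infinite-slope Factor of Safety, as used in TRIGRS-style models.

    c         : soil cohesion (kPa)
    phi_deg   : internal friction angle (degrees)
    gamma_s   : soil unit weight (kN/m^3)
    z         : depth of the failure surface (m)
    theta_deg : slope angle (degrees)
    psi       : pressure head at depth z (m); rises during rainfall
    gamma_w   : unit weight of water (kN/m^3)
    """
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    # Frictional resistance term plus a cohesion term reduced by pore pressure.
    friction_term = math.tan(phi) / math.tan(theta)
    cohesion_term = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(theta) * math.cos(theta))
    return friction_term + cohesion_term

# The same slope, dry versus after rainfall has raised the pressure head:
print(factor_of_safety(c=10, phi_deg=30, gamma_s=18, z=2, theta_deg=35, psi=0.0))
print(factor_of_safety(c=10, phi_deg=30, gamma_s=18, z=2, theta_deg=35, psi=1.5))
```

With these hypothetical values the dry slope is stable (FS above 1), and the rainfall-induced pressure head pushes FS below 1, the failure threshold the model maps cell by cell.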

Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey

Procedia PDF Downloads 148
26 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach

Authors: Utkarsh A. Mishra, Ankit Bansal

Abstract:

At high temperature, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integrodifferential equation of radiative transfer is a complex process, especially when the effects of the participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such a radiative transport problem can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple, yet powerful technique to solve radiative transfer problems in complicated geometries with an arbitrary participating medium. The method, on the one hand, increases the accuracy of estimation and, on the other hand, increases the computational cost. The participating media (generally gases such as CO₂, CO, and H₂O) present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than the uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with participating media was formulated. The history of some randomly sampled photon bundles was recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the Line by Line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed with the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help further reduce computational cost once trained successfully. Multiple ways of selecting the input data, as well as various network architectures, will be tried so that the problem environment can be fully represented by the ANN model. Better results can be achieved in this unexplored domain.
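The advantage of a low-discrepancy sequence over pseudo-random sampling can be illustrated with a minimal sketch. A toy smooth one-dimensional integrand stands in here for the radiative transport kernel; this is not the authors' code, only a demonstration of the Halton (base-2 van der Corput) construction and the resulting error reduction.

```python
import math
import random

def halton(i, base=2):
    """i-th element (1-indexed) of the radical-inverse (van der Corput) sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def mc_estimate(points):
    # Toy smooth integrand: estimate the integral of e^(-x) over [0, 1].
    return sum(math.exp(-x) for x in points) / len(points)

n = 4096
exact = 1.0 - math.exp(-1.0)  # closed form of the toy integral

random.seed(0)
pmc_err = abs(mc_estimate([random.random() for _ in range(n)]) - exact)
qmc_err = abs(mc_estimate([halton(i) for i in range(1, n + 1)]) - exact)
print(f"pseudo-random error: {pmc_err:.2e}, quasi-random error: {qmc_err:.2e}")
```

For a smooth integrand the quasi-random error shrinks roughly like log(n)/n rather than the 1/sqrt(n) of pseudo-random sampling, which is the source of the faster, more stable convergence the abstract reports.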

Keywords: radiative heat transfer, Monte Carlo Method, pseudo-random numbers, low discrepancy sequences, artificial neural networks

Procedia PDF Downloads 191
25 A Systemic Review and Comparison of Non-Isolated Bi-Directional Converters

Authors: Rahil Bahrami, Kaveh Ashenayi

Abstract:

This paper presents a systematic classification and comparative analysis of non-isolated bi-directional DC-DC converters. The increasing demand for efficient energy conversion in diverse applications has spurred the development of various converter topologies. In this study, we categorize bi-directional converters into three distinct classes: Inverting, Non-Inverting, and Interleaved. Each category is characterized by its unique operational characteristics and benefits. Furthermore, a practical comparison is conducted by evaluating the simulation results for each bi-directional converter. BDCs can be classified into isolated and non-isolated topologies. Non-isolated converters share a common ground between input and output, making them suitable for applications with minimal voltage change. They are easy to integrate, lightweight, and cost-effective but have limitations such as limited voltage gain, switching losses, and no protection against high voltages. Isolated converters use transformers to separate input and output, offering safety benefits, high voltage gain, and noise reduction. They are larger and more costly but are essential for automotive designs where safety is crucial. This paper focuses on non-isolated systems. It discusses the classification of non-isolated bi-directional converters based on several criteria. Common factors used for classification include topology, voltage conversion, control strategy, power capacity, voltage range, and application. These factors serve as a foundation for categorizing converters, although the specific scheme might vary depending on contextual, application, or system-specific requirements. The paper presents a three-category classification for non-isolated bi-directional DC-DC converters: inverting, non-inverting, and interleaved.
In the inverting category, converters produce an output voltage with reversed polarity compared to the input voltage, achieved through specific circuit configurations and control strategies. This is valuable in applications such as motor control and grid-tied solar systems. The non-inverting category consists of converters maintaining the same voltage polarity, useful in scenarios like battery equalization. Lastly, the interleaved category employs parallel converter stages to enhance power delivery and reduce current ripple. This classification framework enhances comprehension and analysis of non-isolated bi-directional DC-DC converters. The findings contribute to a deeper understanding of the trade-offs and merits associated with different converter types. As a result, this work aids researchers, practitioners, and engineers in selecting appropriate bi-directional converter solutions for specific energy conversion requirements. The proposed classification framework and experimental assessment collectively enhance the comprehension of non-isolated bi-directional DC-DC converters, fostering advancements in efficient power management and utilization. The simulation process involves the utilization of PSIM to model and simulate non-isolated bi-directional converters from both the inverting and non-inverting categories. The aim is to conduct a comprehensive comparative analysis of these converters, considering key performance indicators such as rise time, efficiency, ripple factor, and maximum error. This systematic evaluation provides valuable insights into the dynamic response, energy efficiency, output stability, and overall precision of the converters. The results of this comparison facilitate informed decision-making and potential optimizations, ensuring that the chosen converter configuration aligns effectively with the designated operational criteria and performance goals.
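The polarity distinction between the inverting and non-inverting categories can be made concrete with the ideal steady-state voltage gains. This is a textbook sketch for two representative topologies, not output from the paper's PSIM models, and it neglects losses.

```python
def buck_boost_gain(duty):
    """Ideal inverting buck-boost converter: Vout/Vin = -D / (1 - D)."""
    return -duty / (1.0 - duty)

def non_inverting_gain(duty):
    """Ideal non-inverting buck-boost (cascaded buck and boost stages
    driven with the same duty cycle D): Vout/Vin = D / (1 - D)."""
    return duty / (1.0 - duty)

# At D = 0.5 both topologies have unity gain magnitude; only polarity differs.
print(buck_boost_gain(0.5), non_inverting_gain(0.5))
# Above D = 0.5 both step the voltage up; the inverting one also flips polarity.
print(buck_boost_gain(0.75), non_inverting_gain(0.75))
```

The same duty-cycle sweep, run against each simulated topology, is one way to sanity-check rise-time and ripple comparisons against the expected operating point.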

Keywords: bi-directional, DC-DC converter, non-isolated, energy conversion

Procedia PDF Downloads 62
24 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges

Authors: Atefeh Salehipoor

Abstract:

Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. The rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition. Approaches to Address Environmental Challenges: 1. Creation of Renewable Raw Material Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2.
Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Innovative Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion: The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.

Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension

Procedia PDF Downloads 54
23 Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews

Authors: Merav Alush Levron, Sivan Rajuan Shtang

Abstract:

From as early as their emergence in Europe and the US, the postmodern and post-colonial paradigms have formed the backbone of the visual culture field of study. The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issue they raised: modernism’s crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its different blind spots, on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self, which forms the basis for the liberal, rational, enlightened subject. This research argues that this politics of representation is itself confined within liberal thought. Alongside post-colonialism and multiculturalism’s contribution to undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research sets out to draw theoretic and empiric attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed deterritorialization of social structures.
This discussion shall have its focus on current post-liberal performances of ‘Mizrahim’ (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art and photography, these performances challenge the issue of representation and generate concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from ‘confrontational’ representation into a 'presence', flooding the visual sphere in our plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the plains of sound, spoken language, the body and the space where they appear. It removes from these plains the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity’s own productive, material immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', where post-colonial interpretation almost invariably identifies a product of internalized oppression, and a recurrence thereof, rather than a source in itself - an ‘offshoot, never a wellspring’, as Nissim Mizrachi clarifies in his recent pioneering work. The peripheral Mizrahi performance ‘unhooks itself’, in Deleuze and Guattari’s words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities.

Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari

Procedia PDF Downloads 300
22 Cardiolipin-Incorporated Liposomes Carrying Curcumin and Nerve Growth Factor to Rescue Neurons from Apoptosis for Alzheimer’s Disease Treatment

Authors: Yung-Chih Kuo, Che-Yu Lin, Jay-Shake Li, Yung-I Lou

Abstract:

Curcumin (CRM) and nerve growth factor (NGF) were entrapped in liposomes (LIP) with cardiolipin (CL) to downregulate the phosphorylation of mitogen-activated protein kinases for Alzheimer’s disease (AD) management. AD is a neurodegenerative disorder with a gradual loss of memory, yielding irreversible dementia. CL-conjugated LIP loaded with CRM (CRM-CL/LIP) and that with NGF (NGF-CL/LIP) were applied to AD models of SK-N-MC cells and Wistar rats with an insult of β-amyloid peptide (Aβ). Lipids comprising 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (Avanti Polar Lipids, Alabaster, AL), 1',3'-bis[1,2-dimyristoyl-sn-glycero-3-phospho]-sn-glycerol (CL; Avanti Polar Lipids), 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-2000] (Avanti Polar Lipids), 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[carboxy(polyethylene glycol)-2000] (Avanti Polar Lipids) and CRM (Sigma–Aldrich, St. Louis, MO) were dissolved in chloroform (J. T. Baker, Phillipsburg, NJ) and condensed using a rotary evaporator (Panchum, Kaohsiung, Taiwan). Human β-NGF (Alomone Lab, Jerusalem, Israel) was added in the aqueous phase. Wheat germ agglutinin (WGA; Medicago AB, Uppsala, Sweden) was grafted onto LIP loaded with CRM (WGA-CRM-LIP) and onto CL-conjugated LIP loaded with CRM (WGA-CRM-CL/LIP) using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide (Sigma–Aldrich) and N-hydroxysuccinimide (Alfa Aesar, Ward Hill, MA). The protein samples of SK-N-MC cells (American Type Tissue Collection, Rockville, MD) were used for sodium dodecyl sulfate (Sigma–Aldrich) polyacrylamide gel (Sigma–Aldrich) electrophoresis.
In the animal study, the LIP formulations were administered by intravenous injection via a tail vein of male Wistar rats (250–280 g, 8 weeks, BioLasco, Taipei, Taiwan), which were housed in the Animal Laboratory of National Chung Cheng University in accordance with the institutional guidelines and the guidelines of the Animal Protection Committee under the Council of Agriculture of the Republic of China. We found that CRM-CL/LIP could inhibit the expression of phosphorylated p38 (p-p38), p-Jun N-terminal kinase (p-JNK), and p-tau protein at serine 202 (p-Ser202) to retard neuronal apoptosis. Free CRM and CRM released from CRM-LIP and CRM-CL/LIP did not effectively inhibit the expression of p-p38 and p-JNK in the cytoplasm. In addition, NGF-CL/LIP enhanced the quantities of p-neurotrophic tyrosine kinase receptor type 1 (p-TrkA) and p-extracellular-signal-regulated kinase 5 (p-ERK5), preventing the Aβ-induced degeneration of neurons. The membrane fusion of NGF-LIP activated the ERK5 pathway, and the targeting capacity of NGF-CL/LIP enhanced the possibility of released NGF affecting the TrkA level. Moreover, WGA-CRM-LIP improved the permeation of CRM across the blood–brain barrier (BBB), significantly reduced the Aβ plaque deposition and malondialdehyde level, and increased the percentage of normal neurons and cholinergic function in the hippocampus of AD rats. This was mainly because the encapsulated CRM was protected by LIP against rapid degradation in the blood. Furthermore, WGA on LIP could target N-acetylglucosamine on endothelia and increased the quantity of CRM transported across the BBB. In addition, WGA-CRM-CL/LIP was effective in suppressing the synthesis of acetylcholinesterase and reduced the decomposition of acetylcholine for better neurotransmission.
Based on the in vitro and in vivo evidences, WGA-CRM-CL/LIP can rescue neurons from apoptosis in the brain and can be a promising drug delivery system for clinical AD therapy.

Keywords: Alzheimer’s disease, β-amyloid, liposome, mitogen-activated protein kinase

Procedia PDF Downloads 311
21 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism

Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman

Abstract:

Conventional vapor-compression refrigeration systems rely on typical refrigerants, such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their corresponding global warming potential and ozone depletion potential raise concerns about their usage. Thus, the need for new refrigeration systems, which are environment-friendly, inexpensive and simple in construction, has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device consisting mainly of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. Typically, the resonator of a standing-wave refrigerator has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow for efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and their sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology are reported.
A 1-D linear acoustic model is carefully and specifically developed, which is followed by building the hardware and establishing the testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz (hereby replacing typical expensive linear motors and loudspeakers) and a thermoacoustic stack within which the conversion of acoustic energy into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material having 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid-stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees between the stack hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five Watts when operating at ambient conditions. A dynamic pressure of 7-kPa amplitude is recorded, yielding a drive ratio of approximately 7%, found to be in good agreement with the theoretical prediction. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operation principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.
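The reported performance figures can be cross-checked against the standard definitions. In the sketch below, the cold-end temperature of 297 K is an assumption for illustration (the abstract reports only the 27-degree span around ambient); the ambient mean pressure is taken as standard atmospheric.

```python
def carnot_cop(t_cold_k, t_hot_k):
    """Carnot coefficient of performance of a refrigerator: Tc / (Th - Tc)."""
    return t_cold_k / (t_hot_k - t_cold_k)

def drive_ratio(p_amplitude_pa, p_mean_pa=101325.0):
    """Ratio of the dynamic pressure amplitude to the mean (ambient) pressure."""
    return p_amplitude_pa / p_mean_pa

# A 27 K span with an assumed 297 K cold end reproduces the reported COP of 11,
# and a 7 kPa dynamic pressure at atmospheric mean gives roughly a 7% drive ratio.
print(round(carnot_cop(297.0, 324.0), 1))
print(round(100 * drive_ratio(7000.0), 1))
```

These two one-line checks tie the measured temperature span and dynamic pressure back to the headline figures quoted in the abstract.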

Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator

Procedia PDF Downloads 349
20 The Proposal for a Framework to Face Opacity and Discrimination ‘Sins’ Caused by Consumer Creditworthiness Machines in the EU

Authors: Diogo José Morgado Rebelo, Francisco António Carneiro Pacheco de Andrade, Paulo Jorge Freitas de Oliveira Novais

Abstract:

Not everything in AI-powered consumer credit scoring turns out to be a wonder. When using AI in Creditworthiness Assessment (CWA), opacity and unfairness ‘sins’ must be considered for the task to be deemed responsible. AI software is not always 100% accurate, which can lead to misclassification. Discrimination against some groups can be exponentiated. A hetero-personalized identity can be imposed on the individual(s) affected. Also, autonomous CWA sometimes lacks transparency when using black-box models. However, for this intended purpose, human analysts ‘on-the-loop’ might not be the best remedy consumers are looking for in credit. This study seeks to explore the legality of implementing a Multi-Agent System (MAS) framework in consumer CWA to ensure compliance with the regulation outlined in Article 14(4) of the Proposal for an Artificial Intelligence Act (AIA), dated 21 April 2021 (as per the last corrigendum by the European Parliament on 19 April 2024). Especially with the adoption of Art. 18(8)(9) of the EU Directive 2023/2225, of 18 October, which will go into effect on 20 November 2026, there should be more emphasis on the need for hybrid oversight in AI-driven scoring to ensure fairness and transparency. In fact, the range of EU regulations on AI-based consumer credit will soon impact the AI lending industry locally and globally, as shown by the broad territorial scope of AIA’s Art. 2. Consequently, engineering the law of consumers’ CWA is imperative. Generally, the proposed MAS framework consists of several layers arranged in a specific sequence, as follows: firstly, the Data Layer gathers legitimate predictor sets from traditional sources; then, the Decision Support System Layer, whose Neural Network model is trained using k-fold Cross Validation, provides recommendations based on the feeder data; the eXplainability (XAI) multi-structure comprises Three-Step-Agents; and, lastly, the Oversight Layer has a 'Bottom Stop' for analysts to intervene in a timely manner.
From the analysis, one can see that a vital component of this software is the XAI layer. It appears as a transparent curtain covering the AI’s decision-making process, enabling comprehension, reflection, and further feasible oversight. Local Interpretable Model-agnostic Explanations (LIME) might act as a pillar by offering counterfactual insights. SHapley Additive exPlanations (SHAP), another agent in the XAI layer, could address potential discrimination issues by identifying the contribution of each feature to the prediction. Alternatively, for thin-file or no-file consumers, the Suggestion Agent can promote financial inclusion. It uses lawful alternative sources, such as the share of wallet, among others, to search for more advantageous solutions to incomplete evaluation appraisals based on genetic programming. Overall, this research aspires to bring the concept of Machine-Centered Anthropocentrism to the table of EU policymaking. It acknowledges that, when put into service, credit analysts no longer exert full control over the data-driven entities programmers have given ‘birth’ to. With similar explanatory agents under supervision, AI itself can become self-accountable, prioritizing human concerns and values. AI decisions should not be vilified inherently. The issue lies in how they are integrated into decision-making and whether they align with non-discrimination principles and transparency rules.
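The kind of per-feature contribution a SHAP agent would report can be illustrated with an exact Shapley computation on a toy scoring model. The feature names, weights, and baseline below are hypothetical; a real CWA deployment would apply the SHAP library to the trained neural network rather than enumerate subsets by hand.

```python
import math
from itertools import combinations

def shapley_values(model, x, baseline):
    """Exact Shapley values for a small feature set: the contribution of each
    feature to one prediction. 'Absent' features are replaced by baseline
    values, a common simplification in SHAP-style explanations."""
    n = len(x)
    phi = [0.0] * n
    features = range(n)
    for i in features:
        others = [j for j in features if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k out of n players.
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                with_i = [x[j] if j in subset or j == i else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                phi[i] += weight * (model(with_i) - model(without_i))
    return phi

# Toy scoring model over (income, debt_ratio, history_length):
score = lambda f: 2.0 * f[0] - 3.0 * f[1] + 1.0 * f[2]
phi = shapley_values(score, x=[1.0, 0.5, 2.0], baseline=[0.0, 0.0, 0.0])
print(phi)  # the contributions sum to score(x) - score(baseline), up to rounding
```

A negative value (here, for the debt-ratio feature) is exactly the kind of signal the oversight layer could inspect when a rejection is suspected of being discriminatory.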

Keywords: creditworthiness assessment, hybrid oversight, machine-centered anthropocentrism, EU policymaking

Procedia PDF Downloads 14
19 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic

Authors: Monique M. Keller, Roline Barnes, Corlia Brandt

Abstract:

Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the field of hand rehabilitation and robotics. Measuring the grasp forces and types produced by the individual fingers during daily functional tasks is valuable to inform and grade rehabilitation practices for second to fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) conducted the most extensive and complete grasp study, which resulted in the GRASP taxonomy. The Covid-19 virus changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional study investigated the hand forces of six healthy adult pilot participants, aged 20 to 59 years, during 105 tasks. The tasks were categorized into five sections, namely personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP Taxonomy. Grasp forces were measured with 13 mm force-sensing resistors (FSRs) glued onto a glove attached to each of the dominant and non-dominant hand’s individual fingers. Testing equipment included: Flexiforce 13 mm FSR 0.5" circle sensors, calibrated prior to testing; 10k 1/4 W resistors; an Arduino Pro Mini 5.0 V compatible board; an Esp-01-kit; an Arduino Uno R3 compatible board; a USB AB cable (1 m); an FTDI FT232 mini USB-to-serial converter; SIL 40 inline connectors; ribbon cable combo male header pins (female to female, male to female); two gloves; glue to attach the FSRs to the gloves; and the Arduino software programme downloaded on a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing.
Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperatures taken, wearing surgical gloves before putting on the testing gloves 1.5 metres long wires attaching the FSR to the Arduino to maintain social distance. Findings Predominant grasps observed during 105 tasks included, adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large-diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). Range of forces applied per category, personal care (1-25N), transport and moving around (1-9 N), home environment and inside (1-41N), gardening and outside (1-26.5N), and office (1-20N). Conclusion Scientifically measurements of finger forces with careful consideration to types of grasps used in daily tasks should guide rehabilitation practices and robotic design to ensure a return to the full participation of the individual into the community.
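The sensing hardware described in the abstract (an FSR in series with a 10k resistor, read by an Arduino analog input) forms a voltage divider. As a minimal sketch of how raw ADC readings could be converted to force, assuming a 5 V supply, a 10k pulldown, the Uno's 10-bit ADC, and a hypothetical calibration constant k (the abstract does not give the authors' actual conversion):

```python
VCC = 5.0            # Arduino supply voltage (V)
R_PULLDOWN = 10_000  # the 10k resistor from the parts list (ohms)
ADC_MAX = 1023       # Arduino Uno 10-bit ADC full scale

def adc_to_fsr_resistance(adc):
    """Invert the divider Vout = VCC * R_PULLDOWN / (R_FSR + R_PULLDOWN)."""
    vout = adc * VCC / ADC_MAX
    if vout <= 0:
        return float("inf")  # no pressure: FSR resistance is effectively infinite
    return R_PULLDOWN * (VCC - vout) / vout

def resistance_to_force(r_fsr, k=1.5e5):
    """FSR conductance (1/R) is roughly proportional to applied force.

    k is a hypothetical per-sensor calibration constant, obtained in
    practice by pressing known weights on the sensor.
    """
    return k / r_fsr
```

Because individual FSRs vary, k would need to be fitted per sensor against known weights before glove readings could be reported in newtons, as in the force ranges above.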

Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces

Procedia PDF Downloads 168
18 Working at the Interface of Health and Criminal Justice: An Interpretative Phenomenological Analysis Exploration of the Experiences of Liaison and Diversion Nurses – Emerging Findings

Authors: Sithandazile Masuku

Abstract:

Introduction: Public health approaches to offender mental health are driven by international policies and frameworks, in response to the disproportionately large representation of people with mental health problems within the offender pathway compared to the general population. Public health service innovations include mental health courts in the US, restorative models in Singapore, and liaison and diversion services in Australia, the UK, and some other European countries. Mental health nurses are at the forefront of offender health service innovations. In the UK context, police custody has been identified as an early point within the offender pathway where nurses can improve outcomes by offering assessments and sharing information with criminal justice partners. This scope of nursing practice has introduced challenges related to the skills and support required for nurses working at the interface of health and the criminal justice system. Parallel literature exploring the experiences of nurses working in forensic settings suggests the presence of compassion fatigue, burnout, and vicarious trauma that may pose a risk of harm to the nurses in these settings. Published research mainly explores service-level outcomes, including the monitoring of figures indicative of a reduction in offending behavior. There is minimal research exploring the experiences of liaison and diversion nurses, who are situated away from a supportive clinical environment and engaged in complex autonomous decision-making. Aim: This paper will share qualitative findings (in progress) from a PhD study that aims to explore the experiences of liaison and diversion nurses in one service in the UK. Methodology: This is a qualitative interview study conducted using Interpretative Phenomenological Analysis to gain an in-depth analysis of lived experiences.
Methods: A purposive sampling technique was used to recruit n=8 mental health nurses, registered with the UK professional body (the Nursing and Midwifery Council), from one UK liaison and diversion service. All participants were interviewed online via video call using a semi-structured interview topic guide. Data were recorded, transcribed verbatim, and analysed using the seven steps of the Interpretative Phenomenological Analysis method. Emerging Findings: Analysis to date has identified pertinent themes: • Difficulties of meaning-making for nurses because of the complexity of their boundary-spanning role. • Emotional burden experienced in a highly emotive and fast-changing environment. • Stress and difficulties with role identity, impacting on individual nurses' ability to be resilient. • Challenges to wellbeing related to a sense of isolation when making complex decisions. Conclusion: Emerging findings have highlighted the lived experiences of nurses working in liaison and diversion as challenging. The nature of the custody environment has an impact on role identity and decision-making. Nurses left feeling isolated and unsupported are less resilient and may go on to experience compassion fatigue. The findings from this study thus far point to a need to connect nurses working in these boundary-spanning roles with a supportive infrastructure where the complexity of their role is acknowledged and they can be connected with a health agenda. In doing this, the nurses would be protected from harm, and the likelihood of sustained positive outcomes for service users would be optimised.

Keywords: liaison and diversion, nurse experiences, offender health, staff wellbeing

Procedia PDF Downloads 107
17 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from the plasma can enable faster and more accurate disease diagnosis and more precise treatment protocols. Open chromatin regions (OCRs) are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. Several studies in the area of cancer liquid biopsy integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a large cost in money and time. To overcome these limitations, the idea of predicting OCRs from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequence data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, which predicts the most probable open chromatin regions from whole genome sequencing data. Our method integrates signal processing with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering.
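A simplified sketch of the pipeline stages named above is given below. This is not the authors' implementation: median normalization, the low-pass cutoff, and in particular replacing the linear-programming graph cut with correlation-threshold connected components are assumptions made for brevity:

```python
import numpy as np

def normalize_counts(depth):
    # Count normalization: scale local sequencing depth by its median
    return depth / np.median(depth)

def lowpass_dft(signal, keep=10):
    # Discrete Fourier Transform step: keep only the lowest `keep`
    # frequency components and invert, smoothing the depth profile
    spec = np.fft.rfft(signal)
    spec[keep:] = 0
    return np.fft.irfft(spec, n=len(signal))

def correlation_clusters(windows, threshold=0.8):
    # Graph construction + clustering: connect genomic windows whose
    # depth profiles correlate above `threshold`, then label connected
    # components (a stand-in for the LP-based graph cut)
    n = len(windows)
    corr = np.corrcoef(windows)
    labels = -np.ones(n, dtype=int)
    current = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack, labels[i] = [i], current
        while stack:
            j = stack.pop()
            for k in range(n):
                if labels[k] == -1 and corr[j, k] >= threshold:
                    labels[k] = current
                    stack.append(k)
        current += 1
    return labels
```

Windows ending up in the same cluster as known-open regions would be called OCR+, the rest OCR-, mirroring the two-way clustering output described below.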
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions from human blood samples in the ATAC-DB database. The overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As OCRs are mostly located at the transcription start sites (TSS) of genes, we also compared the predicted OCRs with human TSS regions obtained from refTSS, which showed concordance of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the use of multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
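The overlap and concordance percentages reported above reduce to an interval-overlap computation between predicted and reference regions. A minimal sketch, assuming half-open base-pair intervals that are sorted and non-overlapping within each list (the abstract does not specify the exact metric used):

```python
def overlap_fraction(predicted, validated):
    """Fraction of predicted bases covered by any validated region.

    Both arguments are lists of (start, end) half-open intervals,
    assumed sorted and non-overlapping within each list.
    """
    total = sum(end - start for start, end in predicted)
    if total == 0:
        return 0.0
    covered, i, j = 0, 0, 0
    while i < len(predicted) and j < len(validated):
        start = max(predicted[i][0], validated[j][0])
        end = min(predicted[i][1], validated[j][1])
        if end > start:
            covered += end - start
        # advance whichever interval ends first
        if predicted[i][1] < validated[j][1]:
            i += 1
        else:
            j += 1
    return covered / total
```

For example, with predictions [(0, 10), (20, 30)] and one validated region (5, 25), 10 of the 20 predicted bases are covered, giving 0.5; the >67% figure above corresponds to this quantity computed genome-wide.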

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 120
16 Evolution of Plio/Pleistocene Sedimentary Processes in Patraikos Gulf, Offshore Western Greece

Authors: E. K. Tripsanas, D. Spanos, I. Oikonomopoulos, K. Stathopoulou, A. S. Abdelsamad, A. Pagoulatos

Abstract:

Patraikos Gulf is located offshore western Greece and is bounded to the west by the islands of Zante, Cephalonia, and Lefkas. The Plio-Pleistocene sequence is characterized by two depocenters, the east and west Patraikos basins, separated from each other by a prominent sill. This study is based on Plio-Pleistocene seismic stratigraphic analysis of a newly acquired 3D PSDM (pre-stack depth migration) seismic survey in the west Patraikos Basin and a few 2D seismic profiles throughout the entire Patraikos Gulf. The eastern Patraikos Basin, although completely filled today, with water depths of less than 100 m, was a deep basin during the Pliocene (>2 km of Pliocene-Pleistocene sediments) and appears to have gathered most of the Achelous River discharge. The west Patraikos Basin was shallower (<1300 m of Pliocene-Pleistocene sediments) and characterized by a hummocky relief due to thrust-belt tectonics and Miocene to Pleistocene halokinetic processes. The Miocene to Pliocene transition is expressed by a widespread erosional unconformity with evidence of fluvial drainage patterns, indicating that the west Patraikos Basin was subaerially exposed during the Messinian Salinity Crisis. Continuous to semi-continuous, parallel reflections in the lower, early to mid-Pliocene seismic package provide evidence that the re-connection of the Mediterranean Sea with the Atlantic Ocean during the Zanclean resulted in the flooding of the west Patraikos Basin and the domination of hemipelagic sedimentation interrupted by occasional gravity flows. This is evident in amplitude and semblance horizon slices, which clearly show the presence of long-running, meandering submarine channels sourced from the southeast (northwest Peloponnese) and north. The long-running nature of the submarine channels suggests mobile, efficient turbidity currents, probably due to a sufficient amount of clay minerals in their suspended load.
The upper seismic section in the study area mainly consists of several successions of clinoforms, interpreted as progradational delta complexes of the Achelous River. This sudden change from marine to shallow-marine sedimentary processes is attributed to climatic changes and eustatic perturbations from the late Pliocene onwards (~2.6 Ma) and/or a switch of the Achelous River from the east Patraikos Basin to the west Patraikos Basin. The deltaic seismic unit consists of four delta complexes. The first two complexes resulted in the infill of topographic depressions and the smoothing of an initially hummocky bathymetry. The distribution of the upper two delta complexes is controlled by compensational stacking. Amplitude and semblance horizon slices depict the development of several almost straight and short (a few km long) distributary submarine channels on the delta slopes and proximal prodeltaic plains, with lobate sand-sheet deposits at their mouths. Such channels are interpreted to result from low-efficiency turbidity currents with a low clay-mineral content. This differentiation in the nature of the gravity flows is attributed to a switch in sediment supply from clay-rich sediments, derived from the draining of flysch formations of the Ionian and Gavrovo zones, to clay-poor sediments derived from the draining of carbonate formations of the Gavrovo zone by the Achelous River.

Keywords: sequence stratigraphy, basin analysis, river deltas, submarine channels

Procedia PDF Downloads 301
15 Stabilizing Additively Manufactured Superalloys at High Temperatures

Authors: Keivan Davami, Michael Munther, Lloyd Hackel

Abstract:

The control of properties and material behavior by thermal-mechanical processing is based on mechanical deformation and annealing according to a precise schedule that produces a unique and stable combination of grain structure, dislocation substructure, texture, and dispersion of precipitated phases. The authors recently developed a thermal-mechanical technique to stabilize the microstructure of additively manufactured nickel-based superalloys even after exposure to high temperatures; however, the mechanism(s) controlling this stability is still under investigation. Laser peening (LP), also called laser shock peening (LSP), is a shock-based (50 ns pulse duration) post-processing technique used to extend performance levels and improve the service life of critical components by developing deep levels of plastic deformation, thereby generating a high density of dislocations and inducing compressive residual stresses in the surface and deep subsurface of components. These compressive residual stresses are usually accompanied by an increase in hardness and enhance the material's resistance to surface-related failures such as creep, fatigue, contact damage, and stress corrosion cracking. While the LP process enhances the life span and durability of the material, the induced compressive residual stresses relax at high temperatures (>0.5Tm, where Tm is the absolute melting temperature), limiting the applicability of the technology. Above 0.5Tm, the compressive residual stresses relax and the yield strength begins to drop dramatically. The principal reason is the increasing rate of solid-state diffusion, which affects both the dislocations and the microstructural barriers. Dislocation configurations commonly recover by mechanisms such as climb and recombination, which proceed rapidly at high temperatures.
Furthermore, precipitates coarsen and grains grow; virtually all of the available microstructural barriers become ineffective. Our results indicate that by using "cyclic" treatments with sequential LP and annealing steps, the compressive stresses survive, and the microstructure remains stable after exposure to temperatures exceeding 0.5Tm for long periods of time. When the laser peening process is combined with annealing, the dislocations formed as a result of LP and the precipitates formed during annealing interact in a complex way that provides further stability at high temperatures. From a scientific point of view, this research lays the groundwork for studying a variety of physics, materials science, and mechanical engineering concepts. It could lead to metals operating at higher sustained temperatures, enabling improved system efficiencies. The strengthening of metals by a variety of means (alloying, work hardening, and other processes) has been of interest for a wide range of applications. However, the mechanistic understanding of the often complex interactions between dislocations, solute atoms, and precipitates during plastic deformation has largely remained scattered in the literature. In this research, the actual mechanisms involved in the novel cyclic LP/annealing process are elucidated through parallel studies of dislocation theory and the implementation of advanced experimental tools. The results of this research help validate a novel laser processing technique for high-temperature applications, greatly expanding the applications of laser peening technology, which was originally devised only for temperatures lower than half of the melting temperature.

Keywords: laser shock peening, mechanical properties, indentation, high temperature stability

Procedia PDF Downloads 126