Search results for: opposition based learning

18018 Handy EKG: Low-Cost ECG For Primary Care Screening In Developing Countries

Authors: Jhiamluka Zservando Solano Velasquez, Raul Palma, Alejandro Calderon, Servio Paguada, Erick Marin, Kellyn Funes, Hana Sandoval, Oscar Hernandez

Abstract:

Background: Screening for cardiac conditions in primary care in developing countries can be challenging, and Honduras is no exception. One of the main limitations is the underfunding of the healthcare system in general, which makes conventional ECG acquisition a secondary priority. Objective: To develop a low-cost ECG that improves screening for arrhythmias in primary care and communication with specialists in secondary and tertiary care. Methods: A portable, pocket-size, low-cost 3-lead ECG (Handy EKG) was designed. The device is autonomous and has Wi-Fi/Bluetooth connectivity options. A mobile app was designed that can access online servers running machine learning, a subset of artificial intelligence, to learn from the data and aid clinicians in their interpretation of readings. Additionally, the device uses the online servers to transfer patients' data and readings to specialists in secondary and tertiary care. Fifty randomized patients volunteered to participate in testing the device. The patients had no previous cardiac-related conditions, and readings were taken: one reading with the conventional ECG and three readings with the Handy EKG using different lead positions. This project was possible thanks to funding provided by the National Autonomous University of Honduras. Results: Preliminary results show that the Handy EKG records cardiac activity similar to that of a conventional electrocardiograph in leads I, II, and III, depending on the position of the leads, at a lower cost. The wave and segment durations, amplitudes, and morphology of the readings were similar to those of the conventional ECG, and interpretation was sufficient to conclude whether an arrhythmia was present. Two cases of prolonged PR segment were found in the readings of both ECG devices. Conclusion: A frugal innovation approach can allow lower-income countries to develop innovative medical devices such as the Handy EKG to meet unmet needs at lower prices without compromising effectiveness, safety, and quality. The Handy EKG provides a solution for primary care screening at a much lower cost and allows convenient storage of the readings on online servers, where patients' clinical data can be accessed remotely by cardiology specialists.
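
As an illustration of the kind of automated interpretation aid described above (not the authors' actual pipeline), the following minimal sketch flags simple screening findings from already-extracted R-peak times and PR intervals; the thresholds are illustrative textbook values, not values from the study.

```python
# Minimal sketch (not the authors' pipeline): given R-peak timestamps and
# measured PR intervals from a 3-lead recording, flag simple screening findings.
# Thresholds are illustrative textbook values, not values from the study.

def screen_ecg(r_peaks_s, pr_intervals_ms):
    """r_peaks_s: R-peak times in seconds; pr_intervals_ms: PR intervals in ms."""
    rr = [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]   # R-R intervals (s)
    hr = 60.0 / (sum(rr) / len(rr))                          # mean heart rate (bpm)
    findings = []
    if hr < 60:
        findings.append("bradycardia")
    elif hr > 100:
        findings.append("tachycardia")
    if any(pr > 200 for pr in pr_intervals_ms):              # prolonged PR (>200 ms)
        findings.append("prolonged PR (possible first-degree AV block)")
    return hr, findings

print(screen_ecg([0.0, 0.8, 1.6, 2.4], [150, 210, 160]))
```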

Keywords: low-cost hardware, portable electrocardiograph, prototype, remote healthcare

Procedia PDF Downloads 174
18017 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review

Authors: R. Christopher, C. Brandt, N. Damons

Abstract:

Background: The sequence of prevention model states that constant assessment of injury identifies injury mechanisms and risk factors, highlighting that collecting and recording data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence, data regarding their applicability, validity, and reliability remain scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORT Discus, Cinahl, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included for the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included for analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P=0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
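
For reference, the I² heterogeneity statistic cited above is conventionally derived from Cochran's Q under an inverse-variance pooling scheme; the sketch below uses hypothetical per-study estimates, not the review's data.

```python
# Minimal sketch of the conventional I^2 heterogeneity calculation from
# Cochran's Q (illustrative effect sizes and variances, not the review's data).

def cochran_q(effects, variances):
    w = [1.0 / v for v in variances]                  # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    return q, pooled

effects = [0.50, 0.75, 0.66]       # hypothetical per-study estimates
variances = [0.004, 0.006, 0.005]  # hypothetical sampling variances
q, pooled = cochran_q(effects, variances)
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
print(f"pooled={pooled:.2f}, Q={q:.2f}, I^2={i_squared:.1f}%")
```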

Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity

Procedia PDF Downloads 170
18016 Value Gaps Between Patients and Doctors

Authors: Yih-Jer Wu, Ling-Lang Huang

Abstract:

Shared decision-making (SDM) is a critical aspect of determining optimal medical strategies. However, current patient decision aids (PDAs) often prioritize evidence-based discussions over value-based considerations. Despite its significance, there is limited research addressing the 'value gap' between patients and healthcare providers. To address this gap, we developed the 'Patient-Doctor Relationship Questionnaire,' consisting of 12 questions. To explore potential variations in the patient-doctor value gap across different medical specialties, we conducted interviews with physicians, surgeons, and their respective patients, utilizing the questionnaire. Between 2020 and 2022, we interviewed a total of 144 patients and 19 doctors. Among the 12 questions, physicians demonstrated significant patient-doctor value gaps in 5 questions, and surgeons in 3 questions. Only one question showed a significant gap for both physicians and surgeons. When both doctors and their patients were asked to choose one of the following six answers (1. No significant issue; 2. Not knowing how to make a medical decision; 3. Not being confident in the doctor's clinical judgment; 4. Not knowing how to articulate one's own condition; 5. Being unable to afford medical expenses; 6. Not understanding what doctors explain) in response to the question of what the most significant issue is in the medical consultation, over 50% of doctors chose "Not knowing how to make a medical decision" (physicians vs. patients, 50% vs. 11%, p=0.046; surgeons vs. patients, 83% vs. 29%, p=0.001), while significantly more patients chose "No significant issue" (10% vs. 52%, p=0.002; 0% vs. 33%, p<0.001, respectively). Our findings indicate that value gaps do exist between patients and doctors and that most patients in Taiwan "fully trust" their doctors' recommendations for medical decisions. However, when treatment outcomes are far from ideal, this overinflated "trust" may turn into frustration, which could become the catalyst for medical disputes. Doctors should spend more time on effective communication with their patients, particularly regarding potentially dissatisfactory treatment outcomes. This study underscores the substantial variability in the patient-doctor value gap, which is often overlooked in SDM. Patients from different clinical backgrounds may hold values distinct from those of their healthcare providers. Bridging this value gap is imperative for achieving genuine and effective SDM.

Keywords: share-decision making, value gaps, communication, doctor-patient relationship

Procedia PDF Downloads 43
18015 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models

Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski

Abstract:

The methodology for measuring the sound absorption coefficient in scale models is based on the ISO 354 standard. The measurement is performed indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to keep the atmospheric conditions stable during both measurements. Possible differences may be corrected using the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers; for absorption coefficient measurement these are, for example, the frequency range or the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, scaling the frequencies, which have to be proportionally higher, is no longer a problem. However, it may be problematic to reduce the values of the attenuation coefficient, which is achieved in practice by drying the air down to a defined relative humidity. Despite the change of frequency range and relative air humidity, the ISO 9613-1 standard still allows the calculation of a correction for small differences in the atmospheric conditions in the chamber during measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to check the consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors measured the reverberation time in a chamber built at 1/8 scale in the corresponding frequency range, i.e. 800 Hz - 40 kHz, and at different values of relative air humidity (from 40% down to 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values agree with each other, but for high frequencies and low values of relative air humidity the differences are significant. Those discrepancies may directly influence the measured sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine a correction minimizing the described inaccuracy.
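
For context, the air-attenuation correction referred to above enters the ISO 354 evaluation roughly as sketched below; the relations are reproduced from memory as a sketch, so ISO 354 and ISO 9613-1 should be consulted for the authoritative expressions.

```latex
% Sketch of the ISO 354 relations referred to above; reproduced from memory,
% consult the standards for the authoritative form.
\begin{align}
  A_T &= 55.3\,V\left(\frac{1}{c_2 T_2} - \frac{1}{c_1 T_1}\right) - 4V\,(m_2 - m_1),\\
  m   &= \frac{\alpha}{10\,\lg e} \approx \frac{\alpha}{4.343},
\end{align}
% V: chamber volume; c_i: speed of sound; T_i: reverberation time
% (1 = empty chamber, 2 = chamber with specimen); m: power attenuation
% coefficient of air; alpha (dB/m): atmospheric attenuation coefficient
% taken from ISO 9613-1.
```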

Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling

Procedia PDF Downloads 411
18014 Innovative Screening Tool Based on Physical Properties of Blood

Authors: Basant Singh Sikarwar, Mukesh Roy, Ayush Goyal, Priya Ranjan

Abstract:

This work combines two bodies of knowledge: the biomedical basis of blood stain formation and the fluid mechanics community's understanding that such blood stain formation depends heavily on physical properties. Moreover, biomedical research indicates that different patterns in blood stains are robust indicators of the donor's health or lack thereof. Based on these valuable insights, an innovative screening tool is proposed which can act as an aid in the diagnosis of diseases such as anemia, hyperlipidemia, tuberculosis, blood cancer, leukemia, malaria, etc., with enhanced confidence in the proposed analysis. To realize this powerful technique, simple, robust, and low-cost microfluidic devices, a micro-capillary viscometer and a pendant drop tensiometer, are designed and proposed to be fabricated to measure the viscosity, surface tension, and wettability of various blood samples. Once prognosis and diagnosis data have been generated, automated linear and nonlinear classifiers are applied for automated reasoning and presentation of results. A support vector machine (SVM) classifies the data in a linear fashion. Discriminant analysis and nonlinear embeddings are coupled with nonlinear manifold detection in the data, and decisions are made accordingly. In this way, physical properties can be used, together with linear and non-linear classification techniques, for screening of various diseases in humans and cattle. Experiments are carried out to validate the physical property measurement devices. This framework can be further developed into a real-life portable disease screening and diagnostic tool. Small-scale production of screening and diagnostic devices is proposed to carry out independent tests.
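
The following minimal sketch shows the kind of linear classification step described above, using an SVM on three physical-property features; the feature values, labels, and the anemia example are hypothetical and are not the authors' data or code.

```python
# Minimal sketch of a linear SVM screening step on three physical-property
# features (viscosity, surface tension, contact angle). Feature values and
# labels are hypothetical, not the authors' data.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# [viscosity (mPa*s), surface tension (mN/m), contact angle (deg)]
X = [[3.5, 55.0, 70.0], [5.2, 48.0, 85.0], [3.8, 54.0, 72.0], [5.6, 47.0, 88.0]]
y = ["healthy", "anemic", "healthy", "anemic"]   # illustrative labels only

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([[4.9, 49.0, 83.0]]))          # screen a new sample
```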

Keywords: blood, physical properties, diagnostic, nonlinear, classifier, device, surface tension, viscosity, wettability

Procedia PDF Downloads 371
18013 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks

Authors: Antonio Pizzarello, Oris Friesen

Abstract:

Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software. Integrity of the deployed software is key, for both the original version and the many versions that result from numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks in which each node is an independent computer system. The connections between nodes are realized via a network that is normally redundantly connected to guarantee the presence of a path between two nodes in the case of failure of some branch. Furthermore, at each node there is software which may fail. Self-stabilizing protocols are usually present that recognize failure in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code for verifying the progress property "p leads-to q", which describes the progress of all computations starting in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software that is designed to recover from failure without external intervention by maintenance personnel. The model to be analyzed is obtained by automatic translation of the system code into a transition system that is based on the use of the weakest precondition.
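
For readers unfamiliar with the notions named above, the sketch below states them in their usual textbook (Chandy and Misra UNITY, Dijkstra weakest precondition) form; the paper's SIA-specific formulation may differ in detail.

```latex
% Usual textbook forms of the notions named above; the paper's own
% SIA formulation may differ in detail.
\begin{align*}
  p \;\textbf{unless}\; q \;&\equiv\; \forall s \in \text{program}:\ \{\,p \wedge \neg q\,\}\ s\ \{\,p \vee q\,\}\\
  p \;\textbf{ensures}\; q \;&\equiv\; (p \;\textbf{unless}\; q) \;\wedge\; \exists s:\ \{\,p \wedge \neg q\,\}\ s\ \{\,q\,\}\\
  p \mapsto q \;&:\; \text{the smallest relation closed under } \textbf{ensures}\text{, transitivity, and disjunction}\\
  wp(s, q) \;&:\; \text{the weakest predicate } p \text{ such that } \{\,p\,\}\ s\ \{\,q\,\} \text{ holds (total correctness)}
\end{align*}
```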

Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition

Procedia PDF Downloads 219
18012 Job Stress Among the Nurses of the Emergency Department of Selected Saudi Hospital

Authors: Mahmoud Abdel Hameed Shahin

Abstract:

Job demands that are incompatible with an employee's skills, resources, or needs cause unpleasant emotional and physical reactions known as job stress. Nurses provide care in hospital emergency rooms all around the world, and since they operate in such a dynamic and unpredictable setting, they are constantly under pressure. Job stress has been found to have harmful impacts on nurses' health as well as on their capacity to handle the demands of their jobs. The purpose of this study was to evaluate the level of job stress experienced by the emergency department nurses at King Fahad Specialist Hospital in Buraidah City, Saudi Arabia. A cross-sectional descriptive study was conducted in October 2021. Eighty nurses were conveniently selected for the study, the bulk of whom worked in King Fahad Specialist Hospital's emergency department. An electronic questionnaire with a sociodemographic data sheet and a job stress scale was given to the participating nurses after ethical approval was obtained from the representative bodies of the Ministry of Health. Both descriptive and inferential statistics were employed, using SPSS Version 26, to analyze and tabulate the acquired data. According to the findings, the factors that contributed the most to job stress in the clinical setting were having an excessive amount of work to do and working under arbitrary deadlines, whereas the factor that contributed the least concerned receiving proper recognition or rewards for good work. In the emergency room of King Fahad Specialist Hospital, nurses had a moderate level of stress (M=3.32 ± 0.567 out of 5). Emergency nurses' levels of job stress varied greatly with experience, with nurses with less than a year of experience notably experiencing the lowest levels of job stress. The level of job stress did not differ significantly based on the emergency nurses' age, nationality, gender, marital status, position, or level of education. The causes and impact of stress on emergency nurses should be identified and alleviated by hospitals through the implementation of interventional programs.

Keywords: emergency nurses, job pressure, Qassim, Saudi Arabia, job stress

Procedia PDF Downloads 178
18011 The Strategies and Mediating Processes of Learning the Inflectional Morphology in English: A Case Study for Taiwanese English Learners

Authors: Hsiu-Ling Hsu, En-Minh (John) Lan

Abstract:

Pronunciation has received increasing attention from language researchers and teachers because it is important for effective, or even successful, communication. Consistently and correctly producing verbal morphology orally, such as the English regular past tense inflection, has been a big challenge and a source of trouble for FL learners. This research aims to explore EFL (English as a foreign language) learners' developmental trajectory of inflectional morphology, that is, what mediating processes and strategies EFL learners use to attain the native-like prosodic structure of inflectional morphemes (e.g., the -ed and -s suffixes), by comparing the differences among EFL learners at different English levels. The research adopted a self-repair analysis and the Prosodic Transfer Hypothesis, with its three developmental stages, as the theoretical framework. To answer the research questions, we conducted two experiments, a written grammatical tense test (Experiment 1) and a read-aloud oral production task (Experiment 2), and recruited 30 participants who were divided into three groups: low, middle, and advanced EFL learners. Experiment 1 was conducted to ensure that participants had learned how to form the English regular past tense, and Experiment 2 was carried out to compare the data across EFL learner groups at different English levels. The EFL learners' self-repair data showed at least four interesting findings. First, low achievers were more sensitive to the plural suffix -s than to the past tense suffix -ed; middle achievers exhibited greater responsiveness to the past tense suffix, while high achievers demonstrated equal sensitivity to both suffixes. Second, two strategies used by the EFL learners to produce verbs and nouns with inflectional morphemes were to delete an internal syllable and to divide a four-syllable verb (e.g., 'graduated') into two prosodic structures (e.g., 'gradu' and 'ated', or 'gradua' and 'ted'). Third, true vowel epenthesis was found only in the low EFL achievers. Fourth, fortition (a native-like sound) was observed in the low and middle EFL achievers. These findings and the self-repair data disclose the mediating processes between the developmental stages and provide insight into how Taiwanese EFL learners attain the adjunction prosodic structures of inflectional morphemes in English.

Keywords: inflectional morphology, prosodic structure, developmental trajectory, strategies and mediating processes, English as a foreign language

Procedia PDF Downloads 59
18010 COVID-19 Laws and Policy: The Use of Policy Surveillance For Better Legal Preparedness

Authors: Francesca Nardi, Kashish Aneja, Katherine Ginsbach

Abstract:

The COVID-19 pandemic has demonstrated both the need for evidence-based and rights-based public health policy and how challenging it can be to make effective decisions with limited information, evidence, and data. The O'Neill Institute, in conjunction with several partners, has been working since the beginning of the pandemic to collect, analyze, and distribute critical data on public health policies enacted in response to COVID-19 around the world in the COVID-19 Law Lab. Well-designed laws and policies can help build strong health systems, implement necessary measures to combat viral transmission, enforce actions that promote public health and safety for everyone, and, at the individual level, have a direct impact on health outcomes. Poorly designed laws and policies, on the other hand, can fail to achieve the intended results and/or obstruct the realization of fundamental human rights, further disease spread, or cause unintended collateral harms. When done properly, law can provide the foundation that brings clarity to complexity, embraces nuance, and identifies gaps of uncertainty. However, laws can also shape the societal factors that make disease possible. Law is inseparable from the rest of society, and COVID-19 has exposed just how much laws and policies intersect with all facets of society. In the COVID-19 context, evidence-based and well-informed law and policy decisions, made at the right time and in the right place, can and have meant the difference between life and death for many. Having a solid evidentiary base of legal information can promote the understanding of what works well and where, and it can drive resources and action to where they are needed most. We know that legal mechanisms can enable nations to reduce inequities and prepare for emerging threats, like novel pathogens that result in deadly disease outbreaks or antibiotic resistance. The collection and analysis of data on these legal mechanisms is a critical step towards ensuring that legal interventions and legal landscapes are effectively incorporated into more traditional kinds of health science data analyses. The COVID-19 Law Lab sees a unique opportunity to collect and analyze this kind of non-traditional data to inform policy, using laws and policies from across the globe and across diseases. This global view is critical to assessing the efficacy of policies in a wide range of cultural, economic, and demographic circumstances. The COVID-19 Law Lab is not just a collection of legal texts relating to COVID-19; it is a dataset of concise and actionable legal information that can be used by health researchers, social scientists, academics, human rights advocates, law and policymakers, government decision-makers, and others for cross-disciplinary quantitative and qualitative analysis to identify best practices from this outbreak, and previous ones, in order to be better prepared for potential future public health events.

Keywords: public health law, surveillance, policy, legal, data

Procedia PDF Downloads 136
18009 Conservation and Restoration of Biodiversity in Khagrachari

Authors: Anima Ashraf

Abstract:

Over the past few decades, biodiversity has become an issue of global concern due to its rapid reduction worldwide, and Bangladesh is no exception. The country is exceptionally endowed with a vast variety of flora and fauna, but due to tremendous population pressure, rural poverty, and unemployment, this biodiversity has decreased alarmingly. Since both biodiversity and sustainable development are part of human life in the modern era, and both work together to make our life safer and more comfortable, a balance should be kept between development and biodiversity conservation, and priority should be given to alternative and sustainable development paths. This paper is based on a study of two projects undertaken by the Arannayk Foundation jointly with its local NGO partners. The aim was to understand previous, current, and future scenarios for the hill biodiversity of Khagrachari in the Chittagong Hill Tracts (CHT) of Bangladesh. It is also observed how alternative income generating activities (AIGA) improve the livelihood of the tribal inhabitants of the area, decrease their dependency on forest resources, and aid conservation activities. Intensive field visits were made, and interviews were conducted with key informants to see the progress and achievements of local NGOs that have been working with the tribal community for the past seven years to restore the denuded hills of Khagrachari. The paper also covers the impacts and interventions of the projects and the methods used to aid conservation activities. Raising awareness among the villagers has reduced the extraction of forest resources by 47%, and granting funds and access to microcredit to adopt AIGAs has increased their average annual income by 25%. Finally, the paper concludes that effective community-based conservation practices are fundamental to ensuring biodiversity conservation in the Chittagong Hill Tracts. In order to conserve biodiversity and restore the forests of the CHT, livelihood development of the villagers has to be considered the main component of the projects undertaken by all NGOs and the Government.

Keywords: biodiversity, conservation, forests, livelihood

Procedia PDF Downloads 269
18008 Pyramid of Deradicalization: Causes and Possible Solutions

Authors: Ashir Ahmed

Abstract:

Generally, radicalization happens when a person's thinking and behaviour become significantly different from how most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious, or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalized or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalization is always linked to religion. Generally, the common motives of radicalization include ideological, issue-based, ethno-nationalist, or separatist underpinnings. Moreover, there are a number of factors that further increase the chances of someone being radicalized and choosing the path of violent extremism and possibly terrorism. Since there are a number of factors (sometimes quite different ones) contributing to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes in dealing with radicalization, violent extremism, and terrorism. The pathway to deradicalization, like the pathway to radicalization, is different for everyone. Considering the need for customized deradicalization resolutions, this study proposes a multi-tier framework, called the 'pyramid of deradicalization', that first helps identify the stage at which an individual could be on the radicalization pathway and then proposes a customized strategy to deal with the respective stage. The first tier (tier 1) addresses the broader community and proposes a 'universal approach' aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on possible factors leading to radicalization and their remedies. The second tier focuses on members of the community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a 'targeted approach' that reaches vulnerable members of the community through early intervention, such as providing anonymous help lines where people feel confident and comfortable in seeking help without fearing the disclosure of their identity. The third tier focuses on people showing clear evidence of moving toward extremism or becoming radicalized. People falling in this tier would be supported through an 'interventionist approach', which advocates community engagement and community policing, introduces deradicalization programmes to the targeted individuals, and looks after their physical and mental health issues. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The 'enforcement approach' involves strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment under the law regardless of gender, race, nationality, or religion, and strengthening family connections. It is anticipated that the operationalization of the proposed framework (the 'pyramid of deradicalization') would help in categorising people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.

Keywords: deradicalization, framework, terrorism, violent extremism

Procedia PDF Downloads 259
18007 Anxiety Treatment: Comparing Outcomes by Different Types of Providers

Authors: Melissa K. Hord, Stephen P. Whiteside

Abstract:

With lifetime prevalence rates ranging from 6% to 15%, anxiety disorders are among the most common childhood mental health diagnoses. Anxiety disorders diagnosed in childhood generally show an unremitting course, lead to additional psychopathology, and interfere with social, emotional, and academic development. Effective evidence-based treatments include cognitive-behavioral therapy (CBT) and selective serotonin reuptake inhibitors (SSRIs). However, if anxious children receive any treatment at all, it is usually through primary care, typically consists of medication, and very rarely includes evidence-based psychotherapy. Despite the high prevalence of anxiety disorders, only two independent research labs have investigated long-term results of CBT treatment for all childhood anxiety disorders, and two for specific anxiety disorders. Generally, the studies indicate that the majority of youth maintain gains up to 7.4 years after treatment. These studies have not been replicated. In addition, little is known about the additional mental health care received by these patients in the intervening years after anxiety treatment, which seems likely to influence the maintenance of gains for anxiety symptoms as well as the development of additional psychopathology during the subsequent years. The original sample consisted of 335 children ages 7 to 17 years (mean 13.09, 53% female) diagnosed with an anxiety disorder in 2010. Medical record review included provider billing records for mental health appointments during the five years after anxiety treatment. The subsample for this study was classified into three groups: 64 children who received CBT in an anxiety disorders clinic, 56 who received treatment from a psychiatrist, and 10 who were seen in a primary care setting. Chi-square analyses showed significant differences in mental health care utilization across the five years after treatment. Youth receiving treatment in primary care averaged less than one appointment each year, and the appointments continued at the same rate across time. Children treated by a psychiatrist averaged approximately 3 appointments per year in the first two years and 2 in the subsequent three years. Importantly, youth treated in the anxiety clinic demonstrated a gradual decrease in mental health appointments across time. The nuanced differences will be presented in greater detail. The results of the current study have important implications for developing dissemination materials to help guide parents when they are selecting treatment for their children. By including all mental health appointments, this study recognizes that anxiety is often comorbid with additional diagnoses and that receiving evidence-based treatment may have long-term benefits associated with improvements in broader mental health. One important caveat might be that the acuity of mental health problems influenced the level of care sought by the patients included in this study; however, even taking this possibility into account, those seeking care in a primary care setting continued to require similar care at the end of the study, indicating that little improvement in symptoms was experienced.

Keywords: anxiety, children, mental health, outcomes

Procedia PDF Downloads 262
18006 Factors of Divergence of Shari’Ah Supervisory Opinions and Its Effects on the Harmonization of Islamic Banking Products and Services

Authors: Dlir Abdullah Ahmed

Abstract:

The overall aims of this study are to investigate the effects of differences of opinion among Shari’ah supervisory bodies on the standardization and internationalization of Islamic banking products and services. The study used semi-structured in-depth interviews in which five respondents, Shari’ah advisors from both the Middle East and Malaysia, participated. The data were analyzed by both manual and software techniques. The findings reveal that there are indeed differences of opinion among Shari’ah advisors in different jurisdictions. These differences are due to differences in educational background, schools of thought, the environment in which they operate, and legal requirements. Moreover, the findings also reveal that these differences in opinion among Shari’ah bodies create confusion among the public and bankers and negatively affect the standardization of Islamic banking transactions. In addition, the study explored the possibility of developing Islamic-based products. However, the findings show that it is difficult for the industry to have Islamic-based products due to high competition from conventional counterparts, legal constraints, and moral hazard. Furthermore, the findings indicate that lack of political will and unity and lack of technology are the main constraints to the internationalization of Islamic banking products. Last but not least, the study found that there is a possibility of convergence of opinions and standardization of Islamic banking products and services if there is a unified international Shari’ah advisory council, international basic requirements for Islamic Shari’ah advisors, and increased training and education of Islamic bankers. This study has several implications for bankers, policymakers, and researchers. Policymakers should resolve their political differences and set up a unified international advisory council and an international research and development center. Bankers should increase training and education of the workforce as well as improve their banking infrastructure to facilitate cross-border transactions.

Keywords: Shari’ah views, Islamic banking, products and services, standardization

Procedia PDF Downloads 63
18005 “The Day I Became a Woman” by Marziyeh Meshkiny: An Analysis of the Cinematographic Image of the Middle East

Authors: Ana Carolina Domingues

Abstract:

This work presents the preliminary results of the above-titled doctoral research. Based on this film and on Middle Eastern authors who discuss films made by women, it has been concluded so far that the film is part of a larger movement which, together with other productions, shows the world perceptions of these women, who see the world differently because they do not hold positions of power. These modes of perception, revealed by the encounter of women with the cameras, educate viewers to denaturalize the impressions constructed in relation to the Middle East.

Keywords: cinema, image, middle east, women

Procedia PDF Downloads 109
18004 Temperature-Based Detection of Initial Yielding Point in Loading of Tensile Specimens Made of Structural Steel

Authors: Aqsa Jamil, Tamura Hiroshi, Katsuchi Hiroshi, Wang Jiaqi

Abstract:

The yield point represents the upper limit of the forces which can be applied to a specimen without causing any permanent deformation. After yielding, the behavior of the specimen changes suddenly, including the possibility of cracking or buckling, so the accumulation of damage or the type of fracture changes depending on this condition. As it is difficult to accurately detect the yield points of the several stress concentration points in structural steel specimens, an effort has been made in this research work to develop a convenient technique using thermography (temperature-based detection) during tensile tests for the precise detection of yield point initiation. To verify the applicability of the thermography camera, tests were conducted under different loading conditions, with deformation measured by various strain gauges and the surface temperature monitored with the help of a thermography camera. The yield point of the specimens was estimated with the help of the temperature dip which occurs, due to the thermoelastic effect, during plastic deformation. The scattering of the data was checked by performing a repeatability analysis. The effects of temperature imperfection and the light source were checked by carrying out the tests in the daytime as well as at midnight; from the signal-to-noise ratio (SNR) calculated for the noisy data from the infrared thermography camera, it can be concluded that the camera is independent of testing time and of the presence of a visible light source. Furthermore, a fully coupled thermal-stress analysis was performed using the Abaqus/Standard exact implementation technique to validate the temperature profiles obtained from the thermography camera and to check the feasibility of numerical simulation for predicting the results extracted with the help of the thermographic technique.
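
For orientation, the two quantities central to the method above are commonly written as below; these are general textbook relations, not equations quoted from the paper.

```latex
% Commonly used forms of the two quantities referred to above (textbook
% relations, not equations quoted from the paper).
\begin{align}
  \Delta T &= -\,K_m\, T_0\, \Delta\sigma, \qquad K_m = \frac{\alpha}{\rho\, c_p}
  && \text{(thermoelastic effect under adiabatic elastic loading)}\\
  \mathrm{SNR}_{\mathrm{dB}} &= 10 \log_{10}\!\left(\frac{P_{\text{signal}}}{P_{\text{noise}}}\right)
  && \text{(signal-to-noise ratio)}
\end{align}
% Delta sigma: change in the sum of principal stresses; T_0: absolute temperature;
% alpha: thermal expansion coefficient; rho: density; c_p: specific heat.
```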

Keywords: signal to noise ratio, thermoelastic effect, thermography, yield point

Procedia PDF Downloads 99
18003 Effect of Compaction Method on the Mechanical and Anisotropic Properties of Asphalt Mixtures

Authors: Mai Sirhan, Arieh Sidess

Abstract:

Asphalt mixture is a heterogeneous material composed of three main components: aggregates, bitumen, and air voids. Professional experience and the scientific literature categorize asphalt mixture as a viscoelastic material whose behavior is determined by temperature and loading rate. Characterization of the properties of the asphalt mixture used under service conditions is done by compacting and testing cylindrical asphalt samples in the laboratory. These samples must closely resemble the internal structure of the mixture achieved in service and the mechanical characteristics of the compacted asphalt layer in the pavement. The laboratory samples are usually compacted at temperatures between 140 and 160 degrees Celsius; in this temperature range, the asphalt has a low degree of strength. The laboratory samples are compacted using dynamic or vibratory compaction methods. In the compaction process, the aggregates tend to align themselves in certain directions, which leads to anisotropic behavior of the asphalt mixture. This issue was studied in the Strategic Highway Research Program (SHRP) research, which recommended using the gyratory compactor based on the assumption that this method best mimics the compaction achieved in service. In Israel, the Netivei Israel company is considering adopting the gyratory method as a replacement for the Marshall method used today. Therefore, the compatibility of the gyratory method with Israeli asphalt mixtures should be investigated. In this research, we aimed to examine the impact of the compaction method on the mechanical characteristics of asphalt mixtures and to evaluate the degree of anisotropy in relation to the compaction method. To carry out this research, samples were compacted in vibratory and gyratory compactors. These samples were cored cylindrically both vertically (in the compaction direction) and horizontally (perpendicular to the compaction direction) and were tested under dynamic modulus and permanent deformation tests. Comparison of the test results showed that: (1) specimens compacted by the vibratory compactor had higher dynamic modulus values than specimens compacted by the gyratory compactor; (2) both vibratory- and gyratory-compacted specimens showed anisotropic behavior, especially at high temperatures, and the degree of anisotropy was higher in specimens compacted by the gyratory method; (3) specimens compacted by the vibratory method and cored vertically had the highest resistance to rutting, while specimens compacted by the vibratory method and cored horizontally had the lowest resistance to rutting; (4) these differences between the specimen types arise mainly from the different internal arrangement of aggregates resulting from the compaction method; and (5) based on an initial prediction of the performance of a flexible pavement containing an asphalt layer with characteristics based on the results achieved in this research, it can be concluded that the compaction method and the degree of anisotropy have a significant impact on the strains that develop in the pavement and on the resistance of the pavement to fatigue and rutting defects.
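
For reference, the dynamic modulus test quantities mentioned above are conventionally defined as below; these are general viscoelastic relations, not values or equations from this study.

```latex
% Textbook definition of the dynamic modulus test quantities mentioned above
% (general viscoelastic relations, not results from this study).
\begin{align}
  \sigma(t) &= \sigma_0 \sin(\omega t), \qquad
  \varepsilon(t) = \varepsilon_0 \sin(\omega t - \varphi),\\
  |E^{*}| &= \frac{\sigma_0}{\varepsilon_0}, \qquad
  \varphi = \omega\,\Delta t,
\end{align}
% sigma_0, epsilon_0: stress and strain amplitudes under sinusoidal loading;
% varphi: phase angle; Delta t: time lag between the stress and strain peaks.
```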

Keywords: anisotropy, asphalt compaction, dynamic modulus, gyratory compactor, mechanical properties, permanent deformation, vibratory compactor

Procedia PDF Downloads 113
18002 Hot Cracking Susceptibility Evaluation of the Advanced UNS S31035 Austenitic Stainless Steel by Varestraint Weldability Testing

Authors: Mikael M. Johansson, Peter Stenvall, Leif Karlsson, Joel Andersson

Abstract:

Sandvik Sanicro 25, UNS S31035, is an advanced high-temperature austenitic stainless steel that can potentially be used in superheaters and reheaters in the next generation of advanced ultra-supercritical power plants. The material possesses both high creep strength and good corrosion resistance at temperatures up to 700°C. Its high-temperature properties are positioned between those of other commercially available high-temperature austenitic stainless steels and nickel-based alloys. It is, however, well known that an austenitic solidification mode combined with a fully austenitic microstructure exacerbates susceptibility to hot cracking. The problem increases even more for thick-walled material in multipass welding and could compromise the integrity of the welded component. Varestraint weldability testing is commonly used to evaluate the susceptibility of materials to hot cracking. In this paper, Varestraint test results for the base material of the UNS S31035 steel are evaluated and compared to those of the well-known and well-characterized UNS S31008 grade. The more creep-resistant alloy, UNS S31035, is metallurgically more complicated than the UNS S31008 grade and has additions of several alloying elements to improve its high-temperature properties; it benefits from both solid solution hardening and precipitation hardening. This investigation therefore attempts, based on Varestraint weldability testing, to understand whether there are any differences in cracking mechanisms between these two grades due to the additional alloying elements used in UNS S31035. Results from Varestraint testing and crack type investigations are presented and discussed in some detail. It is shown that the hot cracking susceptibility of the UNS S31035 steel is only slightly higher than that of UNS S31008, despite its more complicated metallurgy. The weldability of the two alloys is therefore judged to be comparable, making the newer alloy well suited also for critical applications.

Keywords: austenitic stainless steel, hot cracking susceptibility, UNS S31035, UNS S31008, varestraint weldability testing

Procedia PDF Downloads 126
18001 Effect of Low to Moderate Altitude on Football Performance: An Analysis of Thirteen Seasons in the South African Premier Soccer League

Authors: Khatija Bahdur, Duane Dell’Oca

Abstract:

There is limited information on how altitude impacts performance in team sports. Most altitude research in football has been conducted at high elevations ( > 2500m), leaving a gap in understanding of whether low to moderate altitude affects performance. South African Premier Soccer League (PSL) fixtures entail matches played at altitudes from sea level to 1700m above mean sea level. Although coaches highlight the effect of altitude on performance outcomes in matches, further research is needed to establish whether altitude does impact match results. Greater insight into if and how altitude impacts performance in the PSL will assist coaches in deciding whether and how to incorporate altitude in their planning. The purpose of this study is to fill this gap through a retrospective analysis of PSL matches. This quantitative study is based on a descriptive analysis of 181 PSL matches involving one team based at sea level, taking place over a period of thirteen seasons. The following data were obtained: the altitude at which the match was played, the match result, the timing of goals, and the timing of substitutions. Altitude was classified in two ways: inland ( > 500m) versus coastal ( < 500m), and further subdivided into narrower categories ( < 500m, 500-1000m, 1000-1300m, 1300-1500m, > 1500m). The analysis included a two-sample t-test to determine differences in total goals scored and timing of goals between inland and coastal matches, and the chi-square test to identify the significance of altitude on match results. The level of significance was set at the alpha level of 0.05. Match results were significantly affected by altitude and by the level of altitude, with inland teams most likely to win when playing at inland venues (p=0.000). The proportion of draws was slightly higher at the coast. At altitudes between 500-1000m, 1300-1500m, and 1500-1700m, a greater percentage of matches were won by coastal teams as opposed to drawn. The timing of goals varied based on the team's base altitude and the match elevation. The most significant differences were between 36-40 minutes (p=0.023), 41-45 minutes (p=0.000), and 50-65 minutes (p=0.000). Breaking down the inland teams' matches into the different altitude categories highlighted greater differences: inland teams scored more goals per minute between 10-20 minutes (p=0.009), 41-45 minutes (p=0.003), and 50-65 minutes (p=0.015). The total number of goals scored per match also differed significantly across altitudes for a) inland teams (p=0.000) and b) coastal teams (p=0.006). Coastal teams made significantly more substitutions when playing at altitude (p=0.034), although there were no significant differences when comparing the different altitude categories; the timing of all three changes, however, varied significantly at the different altitudes. There were no significant differences in the timing or number of substitutions for inland teams. Match results and the timing of goals are influenced by altitude, with differences between levels of altitude also playing a role. The trends indicate that inland teams win more matches when playing at altitude against coastal teams, and they score more goals just prior to half-time and in the first quarter of the second half.
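
The following minimal sketch shows the kind of chi-square test of independence described above, applied to a win/draw/loss table split by venue altitude; the counts are illustrative and are not the study's data.

```python
# Minimal sketch of a chi-square test of match results by venue altitude
# category (counts are illustrative, not the study's data).
from scipy.stats import chi2_contingency

#                 win  draw  loss   (inland team's result)
table = [
    [40, 20, 15],   # matches at inland venues (>500 m)
    [25, 25, 26],   # matches at coastal venues (<500 m)
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")  # significant if p < 0.05
```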

Keywords: coastal teams, inland teams, timing of goals, results, substitutions

Procedia PDF Downloads 128
18000 Biological Optimization following BM-MSC Seeding of Partially Demineralized and Partially Demineralized Laser-Perforated Structural Bone Allografts Implanted in Critical Femoral Defects

Authors: S. AliReza Mirghasemi, Zameer Hussain, Mohammad Saleh Sadeghi, Narges Rahimi Gabaran, Mohamadreza Baghaban Eslaminejad

Abstract:

Background: Despite the promising results shown by osteogenic cell-based demineralized bone matrix composites, they need to be optimized for grafts that act as structural frameworks in load-bearing defects. The purpose of this experiment is to determine the effect of bone marrow mesenchymal stem cell seeding on partially demineralized, laser-perforated structural allografts implanted in critical femoral defects. Materials and Methods: P3 stem cells were used for graft seeding. Laser perforation was performed in four rows of three holes. Cell-seeded grafts were incubated for one hour before being implanted into the defect. We used four types of grafts: partially demineralized only (Donly), partially demineralized and stem cell seeded (DST), partially demineralized and laser-perforated (DLP), and partially demineralized, laser-perforated, and stem cell seeded (DLPST). Histologic and histomorphometric analyses were performed at 12 weeks. Results: The partially demineralized laser-perforated grafts had the highest woven bone formation within graft limits, the stem cell seeded demineralized laser-perforated grafts remained intact, and the difference between partially demineralized only and partially demineralized stem cell seeded grafts was insignificant. At the interface, partially demineralized laser-perforated and partially demineralized only grafts had comparable osteogenesis, but partially demineralized stem cell seeded grafts were inferior. The interface in the stem cell seeded demineralized laser-perforated grafts was almost replaced by distinct endochondral osteogenesis, with higher angiogenesis in the vicinity. Partially demineralized stem cell seeded and stem cell seeded demineralized laser-perforated graft surfaces had extra vessel-ingrowth-like porosities, a sign of delayed resorption. Conclusion: This demonstrates that simple cell-based composites are not optimal and necessitates the addition of synergistic stipulations and surface changes.

Keywords: structural bone allograft, partial demineralization, laser perforation, mesenchymal stem cell

Procedia PDF Downloads 406
17999 Analysis of Pavement Lifespan - Cost and Emissions of Greenhouse Gases: A Comparative Study of 10-year vs 30-year Design

Authors: Claudeny Simone Alves Santana, Alexandre Simas De Medeiros, Marcelino Aurélio Vieira Da Silva

Abstract:

The aim of the study was to assess the performance of pavements over time, considering the principles of Life Cycle Assessment (LCA), the ability to withstand vehicle loads, and the associated environmental impacts. Within the study boundary, pavement design was conducted using the Mechanistic-Empirical Method, adopting criteria based on pavement cracking and wheel-path rutting, while also considering factors such as soil characteristics, material thickness, and the distribution of forces exerted by vehicles. The Ecoinvent® 3.6 database and SimaPro® software were employed to calculate emissions, and SICRO 3 information was used to estimate costs. The study then sought to identify the service that had the greatest impact on greenhouse gas emissions. The results were compared for design life periods of 10 and 30 years, considering structural performance and load-bearing capacity. Additionally, environmental impacts in terms of CO2 emissions per standard axle and construction costs in dollars per standard axle were analyzed. Based on the analyses conducted, it was possible to determine which pavement exhibited superior performance over time, considering technical, environmental, and economic criteria. One of the findings indicated that the mechanical characteristics of the soils used in the pavement layers directly influence the thickness of the pavement and the quantity of greenhouse gases, with a difference of approximately 7000 kg CO2 eq. The transportation service was identified as having the most significant negative impact. The study can also contribute to future project guidelines and assist in decision-making regarding the selection of the most suitable pavement in terms of durability, load-bearing capacity, and sustainability.
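
The per-standard-axle comparison described above amounts to normalizing life-cycle totals by the number of standard axle repetitions each design carries; the sketch below uses hypothetical totals, not the study's LCA results.

```python
# Minimal sketch of the per-standard-axle normalization described above
# (hypothetical totals; not the study's LCA results).

def per_standard_axle(total_kg_co2_eq, total_cost_usd, design_esals):
    """Normalize life-cycle emissions and cost by the number of standard
    axle load repetitions (ESALs) the pavement is designed to carry."""
    return total_kg_co2_eq / design_esals, total_cost_usd / design_esals

co2_10yr, cost_10yr = per_standard_axle(2.5e5, 1.2e6, 5.0e6)   # 10-year design
co2_30yr, cost_30yr = per_standard_axle(4.0e5, 2.0e6, 2.0e7)   # 30-year design
print(f"10-year: {co2_10yr:.4f} kg CO2 eq/axle, ${cost_10yr:.4f}/axle")
print(f"30-year: {co2_30yr:.4f} kg CO2 eq/axle, ${cost_30yr:.4f}/axle")
```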

Keywords: life cycle assessment, greenhouse gases, urban paving, service cost

Procedia PDF Downloads 64
17998 CO₂ Storage Capacity Assessment of Deep Saline Aquifers in Malaysia

Authors: Radzuan Junin, Dayang Zulaika A. Hasbollah

Abstract:

The increasing amount of greenhouse gases in the atmosphere has recently become one of the most discussed topics in relation to the world's concern about climate change. Developing countries' emissions (such as Malaysia's) are now seen to surpass developed countries' emissions due to rapid economic development growth in recent decades. This paper presents an assessment of potential storage site suitability and storage capacity for CO2 sequestration in the sedimentary basins of Malaysia. This study is the first of its kind to identify potential storage sites and assess the CO2 storage capacity within the deep saline aquifers of the country. The CO2 storage capacity assessment for saline formations was conducted based on the method for quick assessment of CO2 storage capacity in closed and semi-closed saline formations, modified to suit the geological setting of Malaysia. An integrated approach involving geographic information system (GIS) analysis and field data assessment was then adopted to identify the potential storage sites and their capacity for CO2 sequestration. This study concentrated on the assessment of the major sedimentary basins in Malaysia, both onshore and offshore, where potential geological formations in which CO2 could be stored exist below 800 meters and where suitable sealing formations are present. Based on the regional study and the amount of data available, 14 sedimentary basins around Malaysia have been identified as potential CO2 storage sites. Meanwhile, from the screening and ranking exercises, it is evident that the Malay Basin, Central Luconia Province, West Baram Delta, and Balingian Province are ranked as the top four in the ranking system for CO2 storage. 27% of the sedimentary basins in Malaysia were evaluated as high-potential areas for CO2 storage. This study should provide a basis for further work to reduce the uncertainty in these estimates and also provide support to policymakers in future planning of carbon capture and sequestration (CCS) projects in Malaysia.
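
For orientation, saline-aquifer capacity screening of this kind is often summarized with the volumetric estimate below; this is a commonly used screening formula, not necessarily the exact closed/semi-closed formulation applied in the paper.

```latex
% A commonly used volumetric screening estimate for saline-aquifer storage
% capacity (not necessarily the exact closed/semi-closed formulation used
% in the paper).
\begin{equation}
  M_{\mathrm{CO_2}} = A \, h \, \phi \, \rho_{\mathrm{CO_2}} \, E
\end{equation}
% A: basin or trap area; h: net formation thickness; phi: porosity;
% rho_CO2: CO2 density at reservoir conditions; E: storage efficiency factor.
```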

Keywords: CO₂ storage, deep saline aquifer, GIS, sedimentary basin

Procedia PDF Downloads 351
17997 Hand Movements and the Effect of Using Smart Teaching Aids: Quality of Writing Styles Outcomes of Pupils with Dysgraphia

Authors: Sadeq Al Yaari, Muhammad Alkhunayn, Sajedah Al Yaari, Adham Al Yaari, Ayman Al Yaari, Montaha Al Yaari, Ayah Al Yaari, Fatehi Eissa

Abstract:

Dysgraphia is a neurological disorder of written expression that impairs writing ability and fine motor skills, resulting primarily in problems relating not only to handwriting but also to writing coherence and cohesion. We investigate the properties of smart writing technology to highlight some unique features of its effects on the academic performance of pupils with dysgraphia. In Amis, dysgraphic pupils experience writing problems in expressing their ideas when ordinary writing aids are used as the default strategy. The Amis data suggest a possible connection between the available writing aids and pupils' writing improvement, and therefore between those aids and the expression and comprehension of texts. A group of thirteen dysgraphic pupils was placed in a regular primary school classroom, and twenty-one pupils were recruited as a control group. To ensure the validity, reliability, and accountability of the research, both groups studied writing courses for two semesters; the first semester was equipped with smart writing aids, while the second took place in an ordinary classroom. Two pre-tests were undertaken at the beginning of the first two semesters, and two post-tests were administered at the end of both semesters. The tests examined pupils' ability to write coherent, cohesive, and expressive texts. The dysgraphic group, which received the treatment of a writing course in classes with smart technology in the first semester, produced significantly greater increases in written expression than in the ordinary classroom, and their performance was better than that of the control group in the second semester. The current study concludes that using smart teaching aids is a 'must', both for teaching and for learning with dysgraphia. Furthermore, it is demonstrated that for young pupils with dysgraphia, expressive tasks are more challenging than coherence and cohesion tasks. The study therefore supports the literature suggesting a role for smart educational aids in writing, and smart writing techniques may be an efficient addition to regular educational practices, notably in special educational institutions and speech-language therapy facilities. However, further research is needed on prompting adults with dysgraphia more often than is done for older adults without dysgraphia, in order to get them to finish other productive and/or written skills tasks.

Keywords: smart technology, writing aids, pupils with dysgraphia, hand movements

Procedia PDF Downloads 33
17996 The Role of the Chemokine CXCL-10 in Urine as a Marker for the Diagnosis of Active Lung Tuberculosis in HIV/AIDS Patients

Authors: Dwitya Elvira, Raveinal Masri, Rohayat Bilmahdi

Abstract:

The Human Immunodeficiency Virus (HIV) pandemic has increased significantly worldwide. The rise in HIV/AIDS cases has been followed by an increase in the incidence of opportunistic infection, with tuberculosis being the most common opportunistic infection in HIV/AIDS and the main cause of mortality in HIV/AIDS patients. Diagnosis of tuberculosis in HIV/AIDS patients is often difficult because symptoms are frequently atypical compared with patients without the disease. Thus, more effective and efficient diagnostic tools are required to diagnose tuberculosis in HIV/AIDS. CXCL-10/IP-10 is a chemokine that binds to the CXCR3 receptor and is found in HIV/AIDS patients with a weakened immune system. Tuberculosis infection in HIV/AIDS elevates the chemokine IP-10 in urine, which can be used as a marker for the diagnosis of infection. The aim of this study was to establish whether urinary IP-10 can serve as a biomarker for the diagnosis of active lung tuberculosis in HIV/AIDS patients. This was a cross-sectional study involving HIV/AIDS patients with lung tuberculosis as subjects. Forty-seven HIV/AIDS patients with tuberculosis, diagnosed on clinical and biochemical laboratory grounds, were asked to collect urine samples, and urinary IP-10/CXCL-10 was measured using the ELISA method, with 18 urine samples from healthy individuals as controls. HIV/AIDS was more common in men than in women (85.1% vs. 14.5%). Most patients were aged 31-40 years, followed by those aged 21-30 years and over 40 years, with one case diagnosed at less than 20 years of age. The ELISA results showed a significant increase in mean urinary IP-10 in patients with TB-HIV/AIDS co-infection compared with healthy controls (mean 61.05 ± 78.01 pg/mL vs. 17.2 pg/mL). Based on this research, urinary IP-10/CXCL-10 is significantly increased in active lung tuberculosis with HIV/AIDS compared with healthy controls. Given this finding, further research is needed into whether urinary IP-10/CXCL-10 plays a significant role in TB-HIV/AIDS co-infection and whether it can be used as a biomarker for the early diagnosis of TB-HIV.
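
As an illustration of the group comparison reported above, the sketch below runs a Welch's t-test on values simulated around the reported group means; the simulated samples (including the assumed control-group spread) are placeholders, not the study's raw data.

```python
# Sketch of the group comparison described in the abstract: urinary
# IP-10/CXCL-10 in TB-HIV/AIDS co-infection vs. healthy controls.
# The values are simulated around the reported means for illustration;
# they are not the study's raw data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# 47 patients (reported mean ~61 pg/mL, SD ~78) and 18 controls (~17 pg/mL)
tb_hiv  = np.clip(rng.normal(61.05, 78.01, size=47), 0, None)
control = np.clip(rng.normal(17.2, 10.0, size=18), 0, None)  # control SD assumed

# Welch's t-test (unequal variances); a non-parametric test such as
# Mann-Whitney U would also be reasonable for skewed ELISA data.
t_stat, p_value = stats.ttest_ind(tb_hiv, control, equal_var=False)
print(f"TB-HIV mean = {tb_hiv.mean():.1f} pg/mL, "
      f"control mean = {control.mean():.1f} pg/mL, p = {p_value:.4f}")
```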

Keywords: chemokine, HIV/AIDS, IP-10 urine, tuberculosis

Procedia PDF Downloads 220
17995 Treating Voxels as Words: Word-to-Vector Methods for fMRI Meta-Analyses

Authors: Matthew Baucum

Abstract:

With the increasing popularity of fMRI as an experimental method, psychology and neuroscience can greatly benefit from advanced techniques for summarizing and synthesizing large amounts of data from brain imaging studies. One promising avenue is automated meta-analyses, in which natural language processing methods are used to identify the brain regions consistently associated with certain semantic concepts (e.g. “social”, “reward”) across large corpora of studies. This study builds on this approach by demonstrating how, in fMRI meta-analyses, individual voxels can be treated as vectors in a semantic space and evaluated for their “proximity” to terms of interest. In this technique, a low-dimensional semantic space is built from brain imaging study texts, allowing words in each text to be represented as vectors (where words that frequently appear together are near each other in the semantic space). Consequently, each voxel in a brain mask can be represented as a normalized vector sum of all of the words in the studies that showed activation in that voxel. The entire brain mask can then be visualized in terms of each voxel’s proximity to a given term of interest (e.g., “vision”, “decision making”) or collection of terms (e.g., “theory of mind”, “social”, “agent”), as measured by the cosine similarity between the voxel’s vector and the term vector (or the average of multiple term vectors). Analysis can also proceed in the opposite direction, allowing word cloud visualizations of the nearest semantic neighbors for a given brain region. This approach allows for continuous, fine-grained metrics of voxel-term associations, and relies on state-of-the-art “open vocabulary” methods that go beyond mere word-counts. An analysis of over 11,000 neuroimaging studies from an existing meta-analytic fMRI database demonstrates that this technique can be used to recover known neural bases for multiple psychological functions, suggesting this method’s utility for efficient, high-level meta-analyses of localized brain function. While automated text analytic methods are no replacement for deliberate, manual meta-analyses, they seem to show promise for the efficient aggregation of large bodies of scientific knowledge, at least on a relatively general level.
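
A minimal sketch of the voxel-as-vector technique described above: each voxel is scored by the cosine similarity between its normalized sum of study-text vectors and a term vector of interest. The tiny word-vector dictionary, study texts and activation mask are hypothetical placeholders, not the actual meta-analytic database or semantic space.

```python
# Each voxel is represented as the normalized sum of the text vectors of
# all studies reporting activation there, then scored against a term of
# interest by cosine similarity. All inputs below are assumed placeholders.
import numpy as np

def text_vector(words, word_vectors):
    """Average word vector of a study text (unknown words skipped)."""
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(next(iter(word_vectors.values())).shape)

def voxel_vector(study_texts, activation_mask, word_vectors):
    """Normalized sum of the text vectors of all studies activating a voxel.
    activation_mask[i] is True if study i reported activation in this voxel."""
    v = sum(text_vector(t, word_vectors)
            for t, active in zip(study_texts, activation_mask) if active)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Hypothetical 3-dimensional "semantic space", for illustration only
wv = {"reward": np.array([1.0, 0.1, 0.0]),
      "vision": np.array([0.0, 1.0, 0.1]),
      "social": np.array([0.1, 0.0, 1.0])}
studies = [["reward", "social"], ["vision"], ["reward"]]
mask = [True, False, True]          # studies 1 and 3 activate this voxel

vox = voxel_vector(studies, mask, wv)
print("proximity to 'reward':", round(cosine(vox, wv["reward"]), 3))
```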

Keywords: FMRI, machine learning, meta-analysis, text analysis

Procedia PDF Downloads 440
17994 A Ku/K Band Power Amplifier for Wireless Communication and Radar Systems

Authors: Meng-Jie Hsiao, Cam Nguyen

Abstract:

Wide-band devices in the Ku band (12-18 GHz) and K band (18-27 GHz) have received significant attention for high-data-rate communications and high-resolution sensing. In particular, devices operating around 24 GHz are attractive due to the 24-GHz unlicensed applications. One of the most important components in RF systems is the power amplifier (PA). Various PAs have been developed in the Ku and K bands on GaAs, InP, and silicon (Si) processes. Although PAs using GaAs or InP processes can achieve better power handling and efficiency than those realized on Si, it is very hard to integrate the entire system on the same substrate with GaAs or InP, whereas Si facilitates single-chip systems. Hence, good PAs on Si substrates are desirable, and Si-based PAs with good linearity are especially needed for next-generation communication protocols implemented on Si. We report a 16.5 to 25.5 GHz Si-based PA with a flat saturated power of 19.5 ± 1.5 dBm, output 1-dB compression power (OP1dB) of 16.5 ± 1.5 dBm, and 15-23% power-added efficiency (PAE). The PA consists of a drive amplifier, two main amplifiers, and a lumped-element Wilkinson power divider and combiner, designed and fabricated in the TowerJazz 0.18 µm SiGe BiCMOS process, which has a unity power gain frequency (fMAX) of more than 250 GHz. The PA is realized as a cascode amplifier implementing both heterojunction bipolar transistor (HBT) and n-channel metal-oxide-semiconductor field-effect transistor (NMOS) devices for gain, frequency response, and linearity considerations. In particular, a body-floating technique is utilized for the NMOS devices to improve the voltage swing and eliminate parasitic capacitances. The developed PA has a measured flat gain of 20 ± 1.5 dB across 16.5-25.5 GHz. At 24 GHz, the saturated power, OP1dB, and maximum PAE are 20.8 dBm, 18.1 dBm, and 23%, respectively. This high performance makes it attractive for use in Ku/K-band, especially 24 GHz, communication and radar systems. This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
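
As a quick numerical aside on the figures of merit quoted above, the sketch below applies the standard definition of power-added efficiency, PAE = (Pout - Pin) / Pdc, to the reported 24-GHz numbers. The input power and the implied DC power are illustrative assumptions (the abstract does not report them), and using the quoted gain at saturation is only a rough approximation.

```python
# Back-of-the-envelope check of the figures of merit quoted above. The
# PAE definition is standard; the input power and DC power used here are
# illustrative assumptions, since the abstract does not report them.
def dbm_to_watt(p_dbm):
    return 1e-3 * 10 ** (p_dbm / 10.0)

def pae(p_out_dbm, p_in_dbm, p_dc_watt):
    """Power-added efficiency: (Pout - Pin) / Pdc."""
    return (dbm_to_watt(p_out_dbm) - dbm_to_watt(p_in_dbm)) / p_dc_watt

# At 24 GHz the abstract reports Psat = 20.8 dBm, 23% peak PAE and ~20 dB
# gain; assuming (roughly) Pin = Psat - gain, solve for the implied DC power.
p_out, gain = 20.8, 20.0
p_in = p_out - gain                       # dBm, assumed from the quoted gain
p_dc = (dbm_to_watt(p_out) - dbm_to_watt(p_in)) / 0.23
print(f"Implied DC power at peak PAE: {p_dc * 1e3:.0f} mW")
```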

Keywords: power amplifiers, amplifiers, communication systems, radar systems

Procedia PDF Downloads 102
17993 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors are most often hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel-processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%; in more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
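
A minimal sketch of truncated-SVD regularization for a linear ill-posed problem, the core regularization concept named above; the smoothing kernel, synthetic "PSD" and noise level are stand-ins for illustration, not the actual lidar kernels or data products of the software.

```python
# Minimal sketch of truncated SVD (TSVD) regularization for a linear
# ill-posed problem A x = b, of the kind faced when inverting optical
# data for a particle size distribution. The kernel and data here are
# synthetic stand-ins, not the lidar kernels of the actual software.
import numpy as np

def tsvd_solve(A, b, k):
    """Solve A x ~= b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Discard small singular values, which would otherwise amplify noise
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Synthetic smoothing kernel (ill-conditioned) and noisy data
n = 50
x_grid = np.linspace(0, 1, n)
A = np.exp(-30.0 * (x_grid[:, None] - x_grid[None, :]) ** 2)   # Gaussian kernel
x_true = np.exp(-((x_grid - 0.4) / 0.1) ** 2)                  # mono-modal "PSD"
rng = np.random.default_rng(1)
b = A @ x_true + 0.15 * rng.standard_normal(n) * np.linalg.norm(A @ x_true) / np.sqrt(n)

# The truncation index k plays the role of the regularization parameter
x_k = tsvd_solve(A, b, k=8)
print("relative error:", np.linalg.norm(x_k - x_true) / np.linalg.norm(x_true))
```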

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 339
17992 Short-Term Effects of an Open Monitoring Meditation on Cognitive Control and Information Processing

Authors: Sarah Ullrich, Juliane Rolle, Christian Beste, Nicole Wolff

Abstract:

Inhibition and cognitive flexibility are essential parts of executive functions in our daily lives, as they enable the avoidance of unwanted responses and the ability to switch selectively between mental processes to generate appropriate behavior. There is growing interest in improving inhibition and response selection through brief mindfulness-based meditations. Arguably, open-monitoring meditation (OMM) improves inhibitory and flexibility performance by optimizing cognitive control and information processing, yet the underlying neurophysiological processes have been poorly studied. Using the Simon-Go/Nogo paradigm, the present work examined the effect of a single 15-minute smartphone app-based OMM on inhibitory performance and response selection in meditation novices. We used both behavioral and neurophysiological measures (event-related potentials, ERPs) to investigate which subprocesses of response selection and inhibition are altered after OMM. The study was conducted in a randomized crossover design with N = 32 healthy adults, and we investigated Go and Nogo trials within the paradigm. The results show that as little as 15 minutes of OMM can improve response selection and inhibition at behavioral and neurophysiological levels. More specifically, OMM reduces the rate of false alarms, especially during Nogo trials, regardless of congruency. It appears that OMM optimizes conflict processing and response inhibition compared to no meditation, which is also reflected in the ERP N2 and P3 time windows. The results may be explained by the metacontrol model, which posits a specific processing mode with increased flexibility and inclusive decision-making under OMM. Importantly, however, the effects of OMM were only evident when there was prior experience with the task. It is likely that OMM provides more cognitive resources, as the amplitudes of these ERPs decreased. OMM novices seem to make finer adjustments during conflict processing after familiarization with the task.

Keywords: EEG, inhibition, meditation, Simon Nogo

Procedia PDF Downloads 201
17991 Parameters Estimation of Multidimensional Possibility Distributions

Authors: Sergey Sorokin, Irina Sorokina, Alexander Yazenin

Abstract:

We present a solution to the Maxmin u/E parameter estimation problem for possibility distributions in the m-dimensional case. Our method is based on a geometrical approach in which a minimal-area enclosing ellipsoid is constructed around the sample. We also demonstrate that Maxmin u/E parameter estimation can improve the results of well-known algorithms in the fuzzy model identification task.
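
A minimal sketch of the enclosing-ellipsoid construction mentioned above, using Khachiyan's algorithm for the minimum-volume enclosing ellipsoid of a sample; the sample points are placeholders, and the step from the fitted ellipsoid to the Maxmin u/E parameter estimates themselves is not reproduced here.

```python
# Minimum-volume enclosing ellipsoid of a sample via Khachiyan's
# algorithm, as one concrete way to realize the "enclosing ellipsoid"
# step of the geometrical approach. Sample points are placeholders.
import numpy as np

def mvee(points, tol=1e-4):
    """Return (A, c) such that (x - c)^T A (x - c) <= 1 encloses all points."""
    P = np.asarray(points, dtype=float)     # shape (n, d)
    n, d = P.shape
    Q = np.vstack([P.T, np.ones(n)])        # lift points to (d+1) x n
    u = np.ones(n) / n                      # initial uniform weights
    err = tol + 1.0
    while err > tol:
        X = Q @ np.diag(u) @ Q.T
        # "Distances" of the lifted points in the metric induced by X
        M = np.einsum('ij,ji->i', Q.T @ np.linalg.inv(X), Q)
        j = np.argmax(M)
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        err = np.linalg.norm(new_u - u)
        u = new_u
    center = P.T @ u
    A = np.linalg.inv(P.T @ np.diag(u) @ P - np.outer(center, center)) / d
    return A, center

# Hypothetical 2-D sample, for illustration only
pts = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.0], [0.9, 0.9], [0.2, 0.6]])
A, c = mvee(pts)
print("ellipsoid center:", np.round(c, 3))
```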

Keywords: possibility distribution, parameters estimation, Maxmin u/E estimator, fuzzy model identification

Procedia PDF Downloads 464
17990 Evaluation of the Boiling Liquid Expanding Vapor Explosion Thermal Effects in Hassi R'Mel Gas Processing Plant Using Fire Dynamics Simulator

Authors: Brady Manescau, Ilyas Sellami, Khaled Chetehouna, Charles De Izarra, Rachid Nait-Said, Fati Zidani

Abstract:

During a fire in an oil and gas refinery, several thermal accidents can occur and cause serious damage to people and the environment. Among these accidents, the BLEVE (Boiling Liquid Expanding Vapor Explosion) is the most frequently observed and remains a major concern for risk decision-makers. It corresponds to a violent vaporization of explosive nature following the rupture of a vessel containing a liquid at a temperature significantly higher than its normal boiling point at atmospheric pressure. Its effects on the environment generally appear in three ways: blast overpressure, radiation from the fireball if the liquid involved is flammable, and fragment hazards. In order to estimate the potential damage that would be caused by such an explosion, risk decision-makers often use quantitative risk analysis (QRA). This rigorous and advanced approach requires reliable data in order to obtain a good estimate and control of risks. However, in most cases, the data used in QRA are obtained from empirical correlations. These empirical correlations generally overestimate BLEVE effects because they are based on simplifications and do not take into account real parameters such as geometry effects. Considering that these risk analyses are based on an assessment of BLEVE effects on human life and plant equipment, more precise and reliable data should be provided. From this point of view, CFD modeling of BLEVE effects appears as a solution to the limitations of the empirical laws. In this context, the main objective is to develop a numerical tool to predict BLEVE thermal effects using the CFD code FDS version 6. Simulations are carried out with a mesh size of 1 m. The fireball source is modeled as a vertical release of hot fuel over a short time. The modeling of fireball dynamics is based on single-step combustion using an EDC model coupled with the default LES turbulence model. Fireball characteristics (diameter, height, heat flux and lifetime) taken from the large-scale BAM experiment are used to demonstrate the ability of FDS to simulate the various stages of the BLEVE phenomenon from ignition up to total burnout. The influence of release parameters, such as the injection rate and the radiative fraction, on the fireball heat flux is also presented. Predictions are very encouraging and show good agreement with the BAM experimental data. In addition, a numerical study is carried out on an operational propane accumulator in an Algerian gas processing plant of the SONATRACH company located in the Hassi R’Mel Gas Field (the largest gas field in Algeria).
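
For context on the empirical correlations that the abstract contrasts with CFD modeling, the sketch below evaluates commonly quoted fireball diameter and duration correlations for a hypothetical propane mass; the coefficients are assumed literature values and should be checked against the specific reference used in any actual QRA.

```python
# Sketch of the kind of empirical fireball correlations that QRA studies
# rely on and that the abstract contrasts with CFD modeling. The
# coefficients below follow commonly quoted literature values; they are
# assumptions for illustration and should be verified against the
# reference actually used in a given risk study.
def fireball_diameter_m(mass_kg):
    """Maximum fireball diameter (m) for a fuel mass in kg."""
    return 5.8 * mass_kg ** (1.0 / 3.0)

def fireball_duration_s(mass_kg):
    """Fireball duration (s); different exponents are usually quoted
    below and above roughly 30,000 kg of fuel."""
    if mass_kg < 30_000:
        return 0.45 * mass_kg ** (1.0 / 3.0)
    return 2.6 * mass_kg ** (1.0 / 6.0)

# Hypothetical propane inventory, for illustration only
m = 50_000.0  # kg
print(f"D_max ~ {fireball_diameter_m(m):.0f} m, "
      f"duration ~ {fireball_duration_s(m):.1f} s")
```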

Keywords: BLEVE effects, CFD, FDS, fireball, LES, QRA

Procedia PDF Downloads 181
17989 Membrane Bioreactor for Wastewater Treatment and Reuse

Authors: Sarra Kitanou

Abstract:

Water recycling and reuse is an effective measure for addressing water stress, and the sustainable use of water resources has become a national development strategy in Morocco. A key aspect of improving overall sustainability is the potential for direct reuse of wastewater effluent. Hybrid membrane bioreactor (MBR) technology has been identified as an attractive option for producing high-quality, nutrient-rich effluents in wastewater treatment. It is based on complex interactions between biological processes, the filtration process, and the rheological properties of the liquid to be treated. With the evolution of wastewater treatment projects in Morocco, MBR technology can be used to treat different types of wastewater and to produce effluent of suitable quality for reuse. However, the energy consumption of this process is a major concern, which can limit the development and implementation of the technology. In this investigation, the electrical energy consumption of an ultrafiltration membrane bioreactor treating domestic wastewater is evaluated and compared with other MBR installations reported in the literature. The energy requirements of the MBR are linked to operational parameters and reactor performance. The analysis of energy consumption shows that biological aeration and membrane filtration consume more energy than the other components, such as the feed and recirculation pumps: biological aeration accounts for 53% of the overall energy consumption, and membrane filtration for about 25%. Aeration is thus the major energy consumer, often exceeding a 50% share of total energy consumption. Under optimal operating conditions (pressure of 1.15 bar, hydraulic retention time of 15 h), the MBR process achieved removal efficiencies of up to 90% for organic compounds, 100% for suspended solids, and up to 80% for total nitrogen and total phosphorus. The effluent from this MBR system could be considered suitable for irrigation reuse, showing its potential for future application.
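
A small sketch of the energy-consumption breakdown discussed above, converting the quoted component shares into absolute specific energy values; the total specific energy of 1.0 kWh per cubic meter is an assumption for illustration, as the abstract does not report it.

```python
# Breakdown of specific energy consumption (kWh per m^3 of permeate)
# consistent with the shares quoted above (aeration ~53%, filtration ~25%,
# pumps the remainder). The total specific energy value is an assumption
# for illustration; the abstract does not report it.
def energy_breakdown(total_kwh_per_m3, shares):
    """Return absolute kWh/m^3 per component from fractional shares."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {name: total_kwh_per_m3 * frac for name, frac in shares.items()}

shares = {"biological aeration": 0.53,
          "membrane filtration": 0.25,
          "feed and recirculation pumps": 0.22}
for component, kwh in energy_breakdown(1.0, shares).items():  # assume 1.0 kWh/m^3 total
    print(f"{component}: {kwh:.2f} kWh/m^3")
```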

Keywords: hybrid process, membrane bioreactor, wastewater treatment, reuse

Procedia PDF Downloads 76