Search results for: grounded theory development in intermix discourses of analysis

675 Partisan Agenda Setting in Digital Media World

Authors: Hai L. Tran

Abstract:

Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from ZIP codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits with a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group/community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and be influenced by such selective exposure in deciding what issues are more relevant. Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience considering changes in market share due to the spread of digital and social media.
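
As a rough illustration of the bottom-up analysis described above, the sketch below estimates a simple two-equation path model (ideology leading to selective exposure, which leads to perceived issue salience) with ordinary least squares. The column names and the data file are hypothetical placeholders, not the actual American Trends Panel variables or the authors' model specification.

```python
# Minimal path-analysis sketch under assumed variable names; not the study's actual model.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("atp_wave.csv")  # hypothetical extract of the survey data

# Path a: ideology -> selective media exposure
m_a = smf.ols("selective_exposure ~ ideology", data=panel).fit()
# Path b (and direct effect): exposure and ideology -> perceived issue salience
m_b = smf.ols("issue_salience ~ selective_exposure + ideology", data=panel).fit()

indirect = m_a.params["ideology"] * m_b.params["selective_exposure"]
direct = m_b.params["ideology"]
print(f"indirect (mediated) effect: {indirect:.3f}, direct effect: {direct:.3f}")
```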

Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization

Procedia PDF Downloads 33
674 Catalytic Ammonia Decomposition: Cobalt-Molybdenum Molar Ratio Effect on Hydrogen Production

Authors: Elvis Medina, Alejandro Karelovic, Romel Jiménez

Abstract:

Catalytic ammonia decomposition represents an attractive alternative due to its high H₂ content (17.8% w/w) and a product stream free of COₓ, among other advantages; however, challenges need to be addressed for its consolidation as an H₂ chemical storage technology, especially those focused on the synthesis of efficient bimetallic catalytic systems as an alternative to the price and scarcity of ruthenium, the most active catalyst reported. In this sense, from the perspective of rational catalyst design, by adjusting the main catalytic activity descriptor, a screening of supported catalysts with different compositional settings of cobalt-molybdenum metals is presented to evaluate their effect on the catalytic decomposition rate of ammonia. Subsequently, a kinetic study on the supported monometallic Co and Mo catalysts, as well as on the bimetallic CoMo catalyst with the highest activity, is shown. The synthesis of catalysts supported on γ-alumina was carried out using the Charge Enhanced Dry Impregnation (CEDI) method, all with a 5% w/w metal loading. Seeking to maintain uniform dispersion, the catalysts were oxidized and activated (in-situ activation) using a flow of anhydrous air and hydrogen, respectively, under the same conditions: 40 ml min⁻¹ and 5 °C min⁻¹ from room temperature to 600 °C. Catalytic tests were carried out in a fixed-bed reactor, confirming the absence of transport limitations as well as an approach to equilibrium < 1 × 10⁻⁴. The reaction rate on all catalysts was measured between 400 and 500 °C at 53.09 kPa NH₃. The synergy theoretically (DFT) reported for bimetallic catalysts was confirmed experimentally. Specifically, it was observed that the catalyst composed mainly of 75 mol% cobalt proved to be the most active in the experiments, followed by the monometallic cobalt and molybdenum catalysts, in this order of activity as referred to in the literature. A kinetic study was performed at 10.13-101.32 kPa NH₃ and at four equidistant temperatures between 437 and 475 °C. The data were fitted to an LHHW-type model, which considered the desorption of nitrogen atoms from the active phase surface as the rate-determining step (RDS). The regression analyses were carried out under an integral regime, using a minimization algorithm based on SLSQP. The physical meaning of the parameters adjusted in the kinetic model, such as the RDS rate constant (k₅) and the lumped adsorption constant of the quasi-equilibrated steps (α), was confirmed through their Arrhenius and Van't Hoff-type behavior (R² > 0.98), respectively. From an energetic perspective, the activation energy for cobalt, cobalt-molybdenum, and molybdenum was 115.2, 106.8, and 177.5 kJ mol⁻¹, respectively. With this evidence, and considering the volcano shape described by the ammonia decomposition rate in relation to the metal composition ratio, the synergistic behavior of the system is clearly observed. However, since characterizations by XRD and TEM were inconclusive, the formation of intermetallic compounds should still be verified using HRTEM-EDS. From this point onwards, our objective is to incorporate parameters into the kinetic expressions that consider both compositional and structural elements and explore how these can maximize or influence H₂ production.
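
The sketch below illustrates the kind of SLSQP-based regression described above: fitting the rate constant k₅ and the lumped constant α of an LHHW-type rate law (nitrogen desorption as the RDS) to rate data at one temperature. The specific rate expression, the data values, and the initial guesses are illustrative assumptions, not the authors' integral-reactor model or measurements.

```python
# Hedged kinetic-fitting sketch; rate law form and data are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

# Illustrative rate data at one temperature: partial pressures (kPa) and rates (mol g^-1 s^-1)
p_nh3 = np.array([10.13, 30.0, 50.0, 101.32])
p_h2 = np.array([2.0, 5.0, 8.0, 15.0])
r_obs = np.array([1.2e-5, 2.6e-5, 3.4e-5, 4.1e-5])

def lhhw_rate(params, p_nh3, p_h2):
    k5, alpha = params
    theta = alpha * p_nh3 / p_h2**1.5          # quasi-equilibrated surface-nitrogen term
    return k5 * (theta / (1.0 + theta))**2     # RDS: recombinative desorption of N*

def sse(params):
    return np.sum((lhhw_rate(params, p_nh3, p_h2) - r_obs)**2)

fit = minimize(sse, x0=[1e-4, 0.5], method="SLSQP",
               bounds=[(1e-12, None), (1e-12, None)])  # keep k5 and alpha positive
print(fit.x)  # fitted k5 and alpha; repeating per temperature yields Arrhenius/Van't Hoff plots
```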

Keywords: CEDI, hydrogen carrier, LHHW, RDS

Procedia PDF Downloads 20
673 Design Flood Estimation in Satluj Basin: Challenges for Sunni Dam Hydro Electric Project, Himachal Pradesh, India

Authors: Navneet Kalia, Lalit Mohan Verma, Vinay Guleria

Abstract:

Introduction: Design flood studies are essential for effective planning and functioning of water resource projects. Design flood estimation for the Sunni Dam Hydro Electric Project, located in the State of Himachal Pradesh, India, on the river Satluj, was a big challenge in view of the river flowing in the Himalayan region from Tibet to India, having a large catchment area of varying topography, climate, and vegetation. No discharge data was available for the part of the river in Tibet, whereas, for India, it was available only at Khab, Rampur, and Luhri. The estimation of the design flood using standard methods was not possible. This challenge was met using two different approaches for the upper (snow-fed) and lower (rainfed) catchment, namely a flood frequency approach and a hydro-meteorological approach. i) For the catchment up to the Khab gauging site (Sub-Catchment C1), the flood frequency approach was used. Around 90% of the catchment area (46,300 sq km) up to Khab is snow-fed, lying above 4,200 m. In view of the predominant area being snow-fed, the 1-in-10,000-year return period flood estimated using flood frequency analysis at Khab was considered as the Probable Maximum Flood (PMF). The flood peaks were taken from daily observed discharges at Khab, which were increased by 10% to make them instantaneous. The design flood of 4184 cumec thus obtained was considered as the PMF at Khab. ii) For the catchment between Khab and Sunni Dam (Sub-Catchment C2), the hydro-meteorological approach was used. This method is based upon the catchment response to the rainfall pattern observed (Probable Maximum Precipitation, PMP) in a particular catchment area. The design flood computation mainly involves the estimation of a design storm hyetograph and derivation of the catchment response function. A unit hydrograph is assumed to represent the response of the entire catchment area to a unit rainfall. The main advantage of the hydro-meteorological approach is that it gives a complete flood hydrograph, which allows us to make a realistic determination of its moderation effect while passing through a reservoir or a river reach. These studies were carried out to derive the PMF for the catchment area between Khab and the Sunni Dam site using 1-day and 2-day PMP values of 232 and 416 cm, respectively. The PMF so obtained was 12920.60 cumec. Final Result: As the catchment area up to Sunni Dam has been divided into two sub-catchments, the flood hydrograph for catchment C1 has been routed through the connecting channel reach (River Satluj) using the Muskingum method, and accordingly, the design flood was computed after adding the routed flood ordinates to the flood ordinates of catchment C2. The total design flood (i.e., 2-day PMF) with a peak of 15473 cumec was obtained. Conclusion: Even though several factors are relevant while deciding the method to be used for design flood estimation, data availability and the purpose of the study are the most important factors. Since, generally, we cannot wait for hydrological data of adequate quality and quantity to become available, flood estimation has to be done using whatever data is available. Depending upon the type of data available for a particular catchment, the method to be used is to be selected.
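
For illustration, the sketch below routes a sub-catchment hydrograph with the Muskingum method and adds it to the downstream ordinates, following the C1/C2 procedure described above. The routing parameters K and X, the time step, and the ordinates are hypothetical values, not the study data.

```python
# Muskingum channel-routing sketch with assumed parameters and ordinates.
import numpy as np

def muskingum_route(inflow, K=12.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph; K and dt must share the same time units."""
    D = 2.0 * K * (1.0 - X) + dt
    c0 = (dt - 2.0 * K * X) / D
    c1 = (dt + 2.0 * K * X) / D
    c2 = (2.0 * K * (1.0 - X) - dt) / D
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                      # assume an initial steady state
    for t in range(1, len(inflow)):
        outflow[t] = c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[t - 1]
    return outflow

c1_hydrograph = np.array([500, 1500, 3200, 4184, 3600, 2400, 1200, 700], float)   # cumec
c2_hydrograph = np.array([800, 2600, 7000, 12920, 11000, 7500, 4000, 1800], float)
total_design_flood = muskingum_route(c1_hydrograph) + c2_hydrograph
print(total_design_flood.max())   # peak of the combined design flood hydrograph
```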

Keywords: design flood, design storm, flood frequency, PMF, PMP, unit hydrograph

Procedia PDF Downloads 302
672 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation

Authors: Bill D. Geis

Abstract:

Suicide wrongful death forensic cases are the fastest rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of "wrongful death." Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide is about the negligence element, specifically the issue of whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person's mental health care. Standards of care, varying from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This fact leaves the issue of the suicide standard of care, in each case, up to forensic experts to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment that goes beyond verbalized suicide ideation is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.) mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.) were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP), Acute model, had excellent goodness-of-fit [χ²(47) = 94.25***, CFI = .98, RMSEA = .05, 90% CI = .03-.06, p(RMSEA ≤ .05) = .63, AIC = 340.25; ***p < .001]. A further SEM analysis was completed for this paper, adding a measure of acute suicide ideation to the previous model. Acceptable prediction model fit was no longer achieved [χ²(df) = 3.571, CFI = .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicidal Affective Disturbance model and the Suicide Crisis Syndrome model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
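
The sketch below shows how the reported fit indices relate to the model chi-square. Only χ²(47) = 94.25 is taken from the abstract; the sample size and the baseline-model chi-square are hypothetical values chosen so that the illustrative output roughly matches the reported CFI and RMSEA.

```python
# Standard CFI and RMSEA formulas; baseline chi-square and N below are assumed values.
import math

def cfi(chi2_m, df_m, chi2_0, df_0):
    d_m = max(chi2_m - df_m, 0.0)          # non-centrality of the fitted model
    d_0 = max(chi2_0 - df_0, 0.0)          # non-centrality of the baseline (null) model
    return 1.0 - d_m / max(d_0, d_m, 1e-12)

def rmsea(chi2_m, df_m, n):
    return math.sqrt(max(chi2_m - df_m, 0.0) / (df_m * (n - 1)))

chi2_model, df_model = 94.25, 47
chi2_null, df_null, n = 2500.0, 66, 450    # hypothetical baseline model and sample size
print(round(cfi(chi2_model, df_model, chi2_null, df_null), 3),
      round(rmsea(chi2_model, df_model, n), 3))
```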

Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death

Procedia PDF Downloads 45
671 Reducing System Delay to Definitive Care for STEMI Patients: A Simulation of Two Different Strategies in the Brugge Area, Belgium

Authors: E. Steen, B. Dewulf, N. Müller, C. Vandycke, Y. Vandekerckhove

Abstract:

Introduction: The care for an ST-elevation myocardial infarction (STEMI) patient is time-critical. Reperfusion therapy within 90 minutes of initial medical contact is mandatory in the improvement of the outcome. Primary percutaneous coronary intervention (PCI), without previous fibrinolytic treatment, is the preferred reperfusion strategy in patients with STEMI, provided it can be performed within guideline-mandated times. Aim of the study: During a one-year period (January 2013 to December 2013), the files of all consecutive STEMI patients with urgent referral from non-PCI facilities for primary PCI were reviewed. Special attention was given to a subgroup of patients with prior out-of-hospital medical contact generated by the 112-system. In an effort to reduce out-of-hospital system delay to definitive care, a change in pre-hospital 112 dispatch strategies is proposed for these time-critical patients. Actual time recordings were compared with travel time simulations for two suggested scenarios. A first scenario (SC1) involves the decision by the on-scene ground EMS (GEMS) team to transport the out-of-hospital diagnosed STEMI patient directly to a PCI centre, bypassing the nearest non-PCI hospital. Another strategy (SC2) explored the potential role of helicopter EMS (HEMS), where the on-scene GEMS team requests a PCI-centre-based HEMS team for immediate medical transfer to the PCI centre. Methods and Results: 49 (29.1% of all) STEMI patients were referred to our hospital for emergency PCI by a non-PCI facility. One file was excluded because of insufficient data collection. Within this analysed group of 48 secondary referrals, 21 patients had an out-of-hospital medical contact generated by the 112-system. The other 27 patients presented at the referring emergency department without prior contact with the 112-system. Actual time data from first medical contact to definitive care were recorded, together with the simulated possible gain of time for both suggested strategies. The PCI team was always alarmed upon departure from the referring centre, excluding further in-hospital delay. Time simulation tools were similar to those used by the 112-dispatch centre. Conclusion: Our data analysis confirms prolonged reperfusion times in case of secondary emergency referrals for STEMI patients, even with the use of HEMS. In our setting, there was no statistically significant difference in gain of time between the two suggested strategies, both reducing the secondary-referral-generated delay by about one hour and thereby offering all patients PCI within the guideline-mandated time. However, immediate HEMS activation by the on-scene ground EMS team for transport purposes is preferred. This ensures a faster availability of the local GEMS team for its community. In case these options are not available and the guideline-mandated times for primary PCI are expected to be exceeded, primary fibrinolysis should be considered in a non-PCI centre.

Keywords: STEMI, system delay, HEMS, emergency medicine

Procedia PDF Downloads 302
670 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches

Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli

Abstract:

Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. A constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts were reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the iMSD sensitivity to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as 'blocked' (i.e., trajectories corresponding to immobile granules), 'confined/diffusive' (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or 'drifted' (i.e., trajectories corresponding to fast-moving granules). In cell-culturing control conditions, results show the following average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% conf/diff, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7%, while immobile granules increase to 56.0 ± 10.7% (the remaining 40.4 ± 10.2% being conf/diff). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
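
The sketch below illustrates the trajectory-level analysis described above: computing a time-averaged MSD for one tracked granule and classifying its motion from the fitted anomalous exponent. The thresholds and the synthetic track are illustrative assumptions, not the authors' MATLAB criteria.

```python
# MSD-based motion classification sketch with assumed thresholds and synthetic data.
import numpy as np

def time_averaged_msd(xy, max_lag):
    """xy: (N, 2) positions in um; returns MSD for lags 1..max_lag (in frames)."""
    return np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

def classify(xy, dt=0.1, max_lag=10):
    msd = time_averaged_msd(xy, max_lag)
    lags = np.arange(1, max_lag + 1) * dt
    alpha = np.polyfit(np.log(lags), np.log(msd + 1e-12), 1)[0]  # MSD ~ t^alpha
    if alpha < 0.3:
        return "blocked"             # essentially immobile
    if alpha < 1.2:
        return "confined/diffusive"  # sub- to normal diffusion
    return "drifted"                 # superdiffusive, transport-like motion

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 0.05, size=(50, 2)), axis=0)  # synthetic Brownian track
print(classify(track))
```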

Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR

Procedia PDF Downloads 75
669 A Numerical Hybrid Finite Element Model for Lattice Structures Using 3D/Beam Elements

Authors: Ahmadali Tahmasebimoradi, Chetra Mang, Xavier Lorang

Abstract:

Thanks to the additive manufacturing process, lattice structures are replacing traditional structures in the aeronautical and automobile industries. In order to evaluate the mechanical response of lattice structures, one has to resort to numerical techniques. Ansys is globally well-known and trusted commercial software that allows us to model lattice structures and analyze their mechanical responses using either solid or beam elements. In this software, a script may be used to systematically generate lattice structures of any size. On the one hand, solid elements allow us to correctly model the contact between the substrates (the supports of the lattice structure) and the lattice structure, the local plasticity, and the junctions of the microbeams. However, their computational cost increases rapidly with the size of the lattice structure. On the other hand, although beam elements reduce the computational cost drastically, they do not correctly model the contact between the lattice structures and the substrates, nor the junctions of the microbeams. Also, the notion of local plasticity is no longer valid. Moreover, the deformed shape of the beam-element lattice structure does not correspond to the deformed shape obtained using 3D solid elements. In this work, motivated by the pros and cons of the 3D and beam models, a numerical hybrid model is presented for lattice structures to reduce the computational cost of the simulations while avoiding the aforementioned drawbacks of the beam elements. This approach consists of the utilization of solid elements for the junctions and beam elements for the microbeams connecting the corresponding junctions to each other. When the global response of the structure is linear, the results from the hybrid models are in good agreement with the ones from the 3D models for body-centered cubic with z-struts (BCCZ) and body-centered cubic without z-struts (BCC) lattice structures. However, the hybrid models have difficulty converging when the effects of large deformation and local plasticity are considerable in the BCCZ structures. Furthermore, the effect of the junction's size on the results of the hybrid models is investigated. For BCCZ lattice structures, the results are not affected by the junction's size. This is also valid for BCC lattice structures as long as the ratio of the junction's size to the diameter of the microbeams is greater than 2. The hybrid model can take into account geometric defects. As a demonstration, the point clouds of two lattice structures are parametrized in a platform called LATANA (LATtice ANAlysis) developed by IRT-SystemX. In this process, for each microbeam of the lattice structures, an ellipse is fitted to capture the effect of shape variation and roughness. Each ellipse is represented by three parameters: semi-major axis, semi-minor axis, and angle of rotation. Having the parameters of the ellipses, the lattice structures are constructed in SpaceClaim (Ansys) using the geometrical hybrid approach. The results show a negligible discrepancy between the hybrid and 3D models, while the computational cost of the hybrid model is lower than the computational cost of the 3D model.
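
As an illustration of the ellipse parametrization step, the sketch below fits the semi-major axis, semi-minor axis, and rotation angle to the cross-section points of a single microbeam using a simple covariance (PCA) fit; this is a hedged reconstruction, not the LATANA implementation.

```python
# PCA-style ellipse fit to a microbeam cross-section point cloud (illustrative only).
import numpy as np

def fit_ellipse(points):
    """points: (N, 2) cross-section coordinates; returns (a, b, theta_deg)."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    # For points spread along an ellipse outline, axis variance is roughly semi-axis^2 / 2
    b, a = np.sqrt(2.0 * eigvals)
    theta = np.degrees(np.arctan2(eigvecs[1, 1], eigvecs[0, 1]))  # major-axis direction
    return a, b, theta

rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
raw = np.c_[0.6 * np.cos(t), 0.4 * np.sin(t)] + rng.normal(0, 0.02, (500, 2))  # noisy section
angle = np.radians(30)
rot = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
print(fit_ellipse(raw @ rot.T))   # semi-major, semi-minor, rotation angle (deg)
```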

Keywords: additive manufacturing, Ansys, geometric defects, hybrid finite element model, lattice structure

Procedia PDF Downloads 94
668 Validating Chronic Kidney Disease-Specific Risk Factors for Cardiovascular Events Using National Data: A Retrospective Cohort Study of the Nationwide Inpatient Sample

Authors: Fidelis E. Uwumiro, Chimaobi O. Nwevo, Favour O. Osemwota, Victory O. Okpujie, Emeka S. Obi, Omamuyovbi F. Nwoagbe, Ejiroghene Tejere, Joycelyn Adjei-Mensah, Christopher N. Ekeh, Charles T. Ogbodo

Abstract:

Several risk factors associated with cardiovascular events have been identified as specific to Chronic Kidney Disease (CKD). This study endeavors to validate these CKD-specific risk factors using up-to-date national-level data, thereby highlighting the crucial significance of confirming the validity and generalizability of findings obtained from previous studies conducted on smaller patient populations. The study utilized the nationwide inpatient sample database to identify adult hospitalizations for CKD from 2016 to 2020, employing validated ICD-10-CM/PCS codes. A comprehensive literature review was conducted to identify both traditional and CKD-specific risk factors associated with cardiovascular events. Risk factors and cardiovascular events were defined using a combination of ICD-10-CM/PCS codes and statistical commands. Only risk factors with specific ICD-10 codes and hospitalizations with complete data were included in the study. Cardiovascular events of interest included cardiac arrhythmias, sudden cardiac death, acute heart failure, and acute coronary syndromes. Univariate and multivariate regression models were employed to evaluate the association between chronic kidney disease-specific risk factors and cardiovascular events while adjusting for the impact of traditional CV risk factors such as old age, hypertension, diabetes, hypercholesterolemia, inactivity, and smoking. A total of 690,375 hospitalizations for CKD were included in the analysis. The study population was predominantly male (375,564, 54.4%) and primarily received care at urban teaching hospitals (512,258, 74.2%). The mean age of the study population was 61 years (SD 0.1), and 86.7% (598,555) had a CCI of 3 or more. At least one traditional risk factor for CV events was present in 84.1% of all hospitalizations (580,605), while 65.4% (451,505) included at least one CKD-specific risk factor for CV events. The incidence of CV events in the study was as follows: acute coronary syndromes (41,422; 6%), sudden cardiac death (13,807; 2%), heart failure (404,560; 58.6%), and cardiac arrhythmias (124,267; 18%). 91.7% (113,912) of all cardiac arrhythmias were atrial fibrillations. Significant odds of cardiovascular events on multivariate analyses included: malnutrition (aOR: 1.09; 95% CI: 1.06–1.13; p<0.001), post-dialytic hypotension (aOR: 1.34; 95% CI: 1.26–1.42; p<0.001), thrombophilia (aOR: 1.46; 95% CI: 1.29–1.65; p<0.001), sleep disorder (aOR: 1.17; 95% CI: 1.09–1.25; p<0.001), and post-renal transplant immunosuppressive therapy (aOR: 1.39; 95% CI: 1.26–1.53; p<0.001). The study validated malnutrition, post-dialytic hypotension, thrombophilia, sleep disorders, and post-renal transplant immunosuppressive therapy, highlighting their association with increased risk for cardiovascular events in CKD patients. No significant association was observed between uremic syndrome, hyperhomocysteinemia, hyperuricemia, hypertriglyceridemia, leptin levels, carnitine deficiency, anemia, and the odds of experiencing cardiovascular events.
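
The sketch below illustrates the multivariable step described above: a logistic model for cardiovascular events adjusted for traditional risk factors, reporting adjusted odds ratios with 95% confidence intervals. The file name and column names are hypothetical placeholders, not the actual NIS extract or the ICD-10-CM/PCS coding logic.

```python
# Adjusted odds-ratio sketch under assumed variable names; not the study's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

nis = pd.read_csv("ckd_hospitalizations.csv")  # one row per hospitalization, 0/1 indicators

formula = ("cv_event ~ malnutrition + postdialytic_hypotension + thrombophilia "
           "+ sleep_disorder + post_transplant_immunosuppression "
           "+ age + hypertension + diabetes + hypercholesterolemia + smoking")
model = smf.logit(formula, data=nis).fit(disp=False)

odds_ratios = pd.DataFrame({
    "aOR": np.exp(model.params),
    "ci_low": np.exp(model.conf_int()[0]),
    "ci_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(odds_ratios.round(3))
```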

Keywords: cardiovascular events, cardiovascular risk factors in CKD, chronic kidney disease, nationwide inpatient sample

Procedia PDF Downloads 47
667 Screening of Osteoporosis in Aging Populations

Authors: Massimiliano Panella, Sara Bortoluzzi, Sophia Russotto, Daniele Nicolini, Carmela Rinaldi

Abstract:

Osteoporosis affects more than 200 million people worldwide. About 75% of osteoporosis cases are undiagnosed or diagnosed only when a bone fracture occurs. Since osteoporosis-related fractures are significant determinants of the burden of disease and of the health and social costs of aging populations, we believe that the early identification and treatment of high-risk patients should be a priority in current healthcare systems. Screening for osteoporosis by dual energy x-ray absorptiometry (DEXA) is not cost-effective for the general population. An alternative is pulse-echo ultrasound (PEUS) because of its lower cost. To this end, we developed an early detection program for osteoporosis with PEUS, and we evaluated its possible impact and sustainability. We conducted a cross-sectional study including 1,050 people in Italy. Subjects with >1 major or >2 minor risk factors for osteoporosis were invited to PEUS bone mass density (BMD) measurement at the proximal tibia. Based on BMD values, subjects were classified as healthy (BMD > 0.783 g/cm²) or pathological, the latter including subjects with suspected osteopenia (0.719 g/cm² < BMD ≤ 0.783 g/cm²) or osteoporosis (BMD ≤ 0.719 g/cm²). The responder rate was 60.4% (634/1050). According to the risk, a PEUS scan was recommended to 436 people, of whom 300 (mean age 45.2, 81% women) accepted to participate. We identified 240 (80%) healthy and 60 (20%) pathological subjects (47 osteopenic and 13 osteoporotic). We observed a significant association between high-risk status and reduced bone density (p = 0.043), with increased risks for female gender, older age, and menopause (p < 0.01). The yearly cost of the screening program was 8,242 euros. With the current Italian fracture incidence rates in osteoporotic patients, we can reasonably expect that at least 6 fractures will occur in our sample within 20 years. If we consider that the mean cost per fracture in Italy today is 16,785 euros, we can estimate a theoretical cost of 100,710 euros. According to the literature, we can assume that the early treatment of osteoporosis could avoid 24,170 euros of such costs. If we add the actual yearly cost of the treatments to the cost of our program and compare this final amount of 11,682 euros to the avoidable costs of fractures (24,170 euros), we obtain a possible positive benefit/cost ratio of 2.07. As a major outcome, our study allowed us to identify early 60 people with significant bone loss who were not aware of their condition. This diagnostic anticipation constitutes an important element of value for the project, both for the patients, given the preventable negative outcomes caused by fractures, and for society in general, because of the related avoidable costs. Therefore, based on our findings, we believe that the PEUS-based screening performed could be a cost-effective approach to the early identification of osteoporosis. However, our study has some major limitations. In fact, the economic analysis is based on theoretical scenarios; thus, specific studies are needed for a better estimation of the possible benefits and costs of our program.
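
The sketch below ties together the BMD classification thresholds and the benefit/cost arithmetic quoted above; the figures are those reported in the abstract, while the helper functions themselves are illustrative rather than the study's analysis code.

```python
# Classification thresholds and cost arithmetic from the abstract, wrapped for illustration.
def classify_bmd(bmd_g_cm2: float) -> str:
    if bmd_g_cm2 > 0.783:
        return "healthy"
    if bmd_g_cm2 > 0.719:
        return "suspected osteopenia"
    return "suspected osteoporosis"

screening_cost_per_year = 8_242          # euros
expected_fractures_20y = 6
mean_cost_per_fracture = 16_785          # euros
theoretical_fracture_cost = expected_fractures_20y * mean_cost_per_fracture   # 100,710 euros
avoidable_cost = 24_170                  # euros avoidable through early treatment
program_plus_treatment_cost = 11_682     # euros (screening program + yearly treatment costs)

benefit_cost_ratio = avoidable_cost / program_plus_treatment_cost
print(classify_bmd(0.74), round(benefit_cost_ratio, 2))   # -> suspected osteopenia 2.07
```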

Keywords: osteoporosis, prevention, public health, screening

Procedia PDF Downloads 100
666 Digital Twins in the Built Environment: A Systematic Literature Review

Authors: Bagireanu Astrid, Bros-Williamson Julio, Duncheva Mila, Currie John

Abstract:

Digital Twins (DT) are an innovative concept of cyber-physical integration of data between an asset and its virtual replica. They originated in established industries such as manufacturing and aviation and have garnered increasing attention as a potentially transformative technology within the built environment. With the potential to support decision-making, real-time simulations, forecasting abilities and managing operations, DT do not fall under a singular scope. This breadth makes defining and leveraging the potential uses of DT a challenge, and a potential missed opportunity. Despite its recognised potential in established industries, literature on DT in the built environment remains limited. Inadequate attention has been given to the implementation of DT in construction projects, as opposed to its operational stage applications. Additionally, the absence of a standardised definition has resulted in inconsistent interpretations of DT in both industry and academia. There is a need to consolidate research to foster a unified understanding of the DT. Such consolidation is indispensable to ensure that future research is undertaken with a solid foundation. This paper aims to present a comprehensive systematic literature review on the role of DT in the built environment. To accomplish this objective, a review and thematic analysis were conducted, encompassing relevant papers from the last five years. The identified papers are categorised based on their specific areas of focus, and the content of these papers was translated into a thorough classification of DT. In characterising DT and the associated data processes identified, this systematic literature review has identified six DT opportunities specifically relevant to the built environment: facilitating collaborative procurement methods; supporting net-zero and decarbonisation goals; supporting Modern Methods of Construction (MMC) and off-site manufacturing (OSM); providing increased transparency and stakeholder collaboration; supporting complex decision making (real-time simulations and forecasting abilities); and seamless integration with the Internet of Things (IoT), data analytics and other DT. Finally, a discussion of each area of research is provided. A table of definitions of DT across the reviewed literature is provided, seeking to delineate the current state of DT implementation in the built environment context. Gaps in knowledge are identified, as well as research challenges and opportunities for further advancements in the implementation of DT within the built environment. This paper critically assesses the existing literature to identify the potential of DT applications, aiming to harness the transformative capabilities of data in the built environment. By fostering a unified comprehension of DT, this paper contributes to advancing the effective adoption and utilisation of this technology, accelerating progress towards the realisation of smart cities, decarbonisation, and other envisioned roles for DT in the construction domain.

Keywords: built environment, design, digital twins, literature review

Procedia PDF Downloads 50
665 Effect of Temperature on the Permeability and Time-Dependent Change in Thermal Volume of Bentonite Clay During the Heating-Cooling Cycle

Authors: Nilufar Chowdhury, Fereydoun Najafian Jazi, Omid Ghasemi-Fare

Abstract:

The thermal effect on soil induces significant variations in hydraulic conductivity, attributable to temperature-dependent transitions in soil properties. With the elevation of temperature, there can be a notable increase in intrinsic permeability due to the degeneration of bound water molecules into a free state facilitated by thermal energy input. Conversely, thermal consolidation may cause a reduction in intrinsic permeability as soil particles undergo densification. This thermal response of soil permeability exhibits pronounced heterogeneity across different soil types. Furthermore, this temperature-induced disruption of the bound water within clay matrices can enhance the mineral-to-mineral contact, initiating irreversible deformation within the clay structure. This indicates that when soil undergoes heating-cooling cycles, plastic strain can develop, which needs to be investigated for every soil type to properly understand the thermo-hydro-mechanical behavior of clay. This research aims to study the effect of the heating-cooling cycle on the intrinsic permeability and the time-dependent evolution of thermal volume change of sodium bentonite clay. A temperature-controlled triaxial permeameter cell is used in this study. The selected temperatures were 20 °C, 40 °C, and 80 °C. The hydraulic conductivity of bentonite clay under 100 kPa confining stress was measured. Hydraulic conductivity analysis was performed on a saturated sample at a void ratio e = 0.9, corresponding to a dry density of 1.2 Mg/m³. Different hydraulic gradients were applied between the top and bottom of the sample to obtain a measurable flow through the sample. The hydraulic gradient used for the experiment was 4000. The diameter and thickness of the sample are 101.6 mm and 25.4 mm, respectively. Both for heating and cooling, the hydraulic conductivity at each temperature was measured after the flow reached the steady-state condition, to make sure the volume change due to thermal loading had stabilized. Thus, soil specimens were kept at a constant temperature during both the heating and cooling phases for 10 to 18 days to facilitate the equilibration of hydraulic transients. To assess the influence of temperature-induced volume changes of bentonite clay, the evolution of the void ratio during this period was monitored. It is observed that the intrinsic permeability increases by 30-40% during the heating cycle. The permeability during the cooling cycle is 10-12% lower compared to the permeability observed during the heating cycle at a particular temperature. This reduction in permeability implies a change in soil fabric due to the thermal effect. An initial increase followed by a rapid decrease in void ratio was observed, representing the occurrence of possible osmotic swelling phenomena followed by thermal consolidation. It has been observed that after a complete heating-cooling cycle, there is a significant change in the void ratio compared to the initial void ratio of the sample. The results obtained suggest that bentonite clay's microstructure can change when subjected to a complete heating-cooling process, which in turn governs macro-scale behavior such as the permeability of bentonite clay.
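
The sketch below shows how intrinsic permeability can be separated from the purely thermal effect on the permeant, k = Kμ(T)/(ρ(T)g), so that a change in k reflects soil fabric rather than the lower viscosity of warm water. The water-property values are approximate textbook figures and the conductivities are illustrative, not the study's measurements.

```python
# Intrinsic permeability from measured hydraulic conductivity; property values are approximate.
def intrinsic_permeability(K_m_per_s, temp_c):
    """Convert measured hydraulic conductivity K (m/s) at temp_c (Celsius) to k (m^2)."""
    viscosity = {20: 1.002e-3, 40: 0.653e-3, 80: 0.355e-3}   # Pa*s, approximate water values
    density = {20: 998.2, 40: 992.2, 80: 971.8}              # kg/m^3, approximate water values
    g = 9.81
    return K_m_per_s * viscosity[temp_c] / (density[temp_c] * g)

# Illustrative conductivities measured at 20 C and 80 C
k20 = intrinsic_permeability(2.0e-12, 20)
k80 = intrinsic_permeability(5.5e-12, 80)
print(f"k(20 C) = {k20:.2e} m^2, k(80 C) = {k80:.2e} m^2, ratio = {k80 / k20:.2f}")
```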

Keywords: bentonite, permeability, temperature, thermal volume change

Procedia PDF Downloads 12
664 Enhancing Students’ Academic Engagement in Mathematics through a “Concept+Language Mapping” Approach

Authors: Jodie Lee, Lorena Chan, Esther Tong

Abstract:

Hong Kong students face a unique learning environment. Starting from the 2010/2011 school year, the Education Bureau (EDB) of the Government of the Hong Kong Special Administrative Region implemented fine-tuned Medium of Instruction (MOI) arrangements for secondary schools. Since then, secondary schools in Hong Kong have been given the flexibility to decide the most appropriate MOI arrangements for their schools under the new academic structure for senior secondary education, particularly for the compulsory part of the mathematics curriculum. In the 2019 Hong Kong Diploma of Secondary Education Examination (HKDSE), over 40% of school-day candidates attempted the Mathematics Compulsory Part examination in the Chinese version, while the rest took the English version. Moreover, only 14.38% of candidates sat for one of the extended Mathematics modules. This results in a series of intricate issues for students' learning in post-secondary education programmes. It is worth noting that when students pursue higher education in Hong Kong or even overseas, they may face substantial difficulties in transitioning from learning mathematics in their mother tongue in Chinese-medium instruction (CMI) secondary schools to an English-medium learning environment. Some students who understood the mathematics concepts were found to fail to fulfil the course requirements at college or university due to their CMI learning experience at secondary school. They are particularly weak in comprehending the mathematics questions when they are doing their assessment or attempting the test/examination. A government-funded project was conducted with the aim of providing an integrated learning context and language support to students with a lower level of numeracy and/or with CMI learning experience. By introducing this "integrated concept + language mapping approach", students can cope with the learning challenges in the compulsory English-medium mathematics and statistics subjects in their tertiary education. Ultimately, it is hoped that students can enhance their mathematical ability, analytical skills, and numerical sense for their lifelong learning. The "Concept + Language Mapping" (CLM) approach was adopted and tried out in the bridging courses for students with a lower level of numeracy and/or with CMI learning experiences. At the beginning of each class, a pre-test was conducted, and class time was then devoted to introducing the concepts by the CLM approach. For each concept, the key thematic items and their different semantic relations are presented using graphics and animations via the CLM approach. At the end of each class, a post-test was conducted. Quantitative data analysis was performed to study the effect of the CLM approach on students' learning. Stakeholders' feedback was collected to estimate the effectiveness of the CLM approach in facilitating both content and language learning. The results, based on both students' and lecturers' feedback, indicated positive outcomes of adopting the CLM approach in enhancing the mathematical ability and analytical skills of CMI students.

Keywords: mathematics, Concept+Language Mapping, level of numeracy, medium of instruction

Procedia PDF Downloads 64
663 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety

Authors: Neeti Nayak, Khalid Duri

Abstract:

Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States which involve motorized vehicles near intersections remain largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicates accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections)- the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians are accounted for by locations which should have been designed for safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limit the capacity of resource-constrained administrations to satisfy their community’s needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions- the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards given the availability of relevant GIS data.
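
As a minimal illustration of one risk flag such a framework could produce, the sketch below marks intersections that lie near pedestrian activity generators but have no marked crosswalk within a threshold distance. The geometries and thresholds are illustrative placeholders; a full workflow would read the parcel, crosswalk, and street-network layers from the GIS data described above.

```python
# Simple proximity-based jaywalking-risk flag with assumed geometries and thresholds.
from shapely.geometry import Point

intersections = {"5th_and_main": Point(0, 0), "7th_and_oak": Point(400, 120)}
crosswalks = [Point(5, 8)]                              # marked crosswalk locations
activity_points = [Point(-20, 30), Point(380, 100)]     # e.g., school entrance, transit stop

CROSSWALK_RADIUS_M = 25.0
ACTIVITY_RADIUS_M = 150.0

for name, node in intersections.items():
    near_activity = any(node.distance(p) <= ACTIVITY_RADIUS_M for p in activity_points)
    has_crosswalk = any(node.distance(c) <= CROSSWALK_RADIUS_M for c in crosswalks)
    if near_activity and not has_crosswalk:
        print(f"{name}: flagged for elevated jaywalking risk "
              f"(no crosswalk within {CROSSWALK_RADIUS_M:.0f} m)")
```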

Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design

Procedia PDF Downloads 86
662 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

Artificial vision is the process of giving computers the ability to see. It is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be carried out through counting, positioning, and object-recognition algorithms using a single camera (or more). In addition, companies use assembly lines formed by conveyor systems with actuators for moving pieces from one location to another during production. These devices must be programmed in advance for good performance and must follow a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle) through a camera, each one with a different color, and to link it with a group of conveyor systems that sort the mentioned figures into cubicles, which also differ from one another by color. This project is based on artificial vision; therefore, the methodology needed to develop it must be strict. It is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with OpenCV libraries. Together, these tools are used to build the program that identifies colors and shapes directly from the camera. 1.2 Image acquisition: To start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or by a different specialized camera. 1.3 The recognition of RGB colors is performed in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes it is necessary to segment the images: the first step is converting the image from RGB to grayscale to work with the dark tones of the image; then the image is binarized, which means the figure appears in white on a black background. Finally, we find the contours of the figure in the image and count its edges to identify which shape it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, will sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that the program can then process. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate.
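
The sketch below reconstructs steps 1.2-1.4 in Python/OpenCV (grayscale conversion, binarization, contour extraction, and edge counting). It is an illustrative sketch of the described pipeline, not the authors' Qt Creator implementation, and it assumes darker figures on a lighter background.

```python
# Shape classification via grayscale -> binarization -> contours -> vertex count.
import cv2

def classify_shapes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Invert so the figure appears white on a black background (assumes dark figures)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    labels = []
    for c in contours:
        if cv2.contourArea(c) < 500:          # ignore small noise blobs
            continue
        approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
        if len(approx) == 3:
            labels.append("triangle")
        elif len(approx) == 4:
            labels.append("square")
        else:
            labels.append("circle")           # many vertices approximate a round contour
    return labels

cap = cv2.VideoCapture(0)                     # computer's web camera
ok, frame = cap.read()
if ok:
    print(classify_shapes(frame))
cap.release()
```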

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 359
661 Aspiring to Achieve a Fairer Society

Authors: Bintou Jobe

Abstract:

Background: The research is focused on the concept of equality, diversity, and inclusion (EDI) and the need to achieve equity by treating individuals according to their circumstances and needs. The research is rooted in the UK Equality Act 2010, which emphasises the importance of equal opportunities for all individuals regardless of their background and social life. However, inequality persists in society, particularly for those from minority backgrounds who face discrimination. Research Aim: The aim of this research is to promote equality, diversity, and inclusion by encouraging the regeneration of minds and the eradication of stereotypes. The focus is on promoting good Equality, Diversity and Inclusion practices in various settings, including schools, colleges, universities, and workplaces, to create environments where every individual feels a sense of belonging. Methodology: The research utilises a literature review approach to gather information on promoting inclusivity, diversity, and inclusion. Findings: The research highlights the significance of promoting equality, diversity, and inclusion practices to ensure that individuals receive the respect and dignity they deserve. It emphasises the importance of treating individuals based on their unique circumstances and needs rather than relying on stereotypes. The research also emphasises the benefits of diversity and inclusion in enhancing innovation, creativity, and productivity. The theoretical importance of this research is to raise awareness about the importance of regenerating minds, challenging stereotypes, and promoting equality, diversity, and inclusion. The emphasis is on treating individuals based on their circumstances and needs rather than relying on generalisations. Diversity and inclusion are beneficial in different settings, as highlighted by the research. By raising awareness about the importance of mind regeneration, eradicating stereotypes, and promoting equality, diversity, and inclusion, this research makes a significant contribution to the subject area. It emphasises the necessity of treating individuals based on their unique circumstances instead of relying on generalisations. However, the methodology could be strengthened by incorporating primary research to complement the literature review approach. Data Collection and Analysis Procedures: The research utilised a literature review approach to gather relevant information on promoting inclusivity, diversity, and inclusion. The NVivo software application was used to analyse and synthesise the findings to identify themes and support the research aim and objectives. Question Addressed: This research addresses the question of how to promote inclusivity, diversity, and inclusion and reduce the prevalence of stereotypes and prejudice. It explores the need to treat individuals based on their unique circumstances and needs rather than relying on generic assumptions. Recommendations include: encouraging individuals to adopt a more inclusive approach; providing managers with responsibility and training that helps them understand the importance of their roles in shaping the workplace culture; and having an equality, diversity, and inclusion manager from a majority background at the senior level who can speak up for underrepresented groups and flag any issues that need addressing. Conclusion: The research emphasises the importance of promoting equality, diversity, and inclusion practices to create a fairer society.
It highlights the need to challenge stereotypes, treat individuals according to their circumstances and needs, and promote a culture of respect and dignity.

Keywords: equality, fairer society, inclusion, diversity

Procedia PDF Downloads 32
660 Examining the Influence of Firm Internal Level Factors on Performance Variations among Micro and Small Enterprises: Evidence from Tanzanian Agri-Food Processing Firms

Authors: Pulkeria Pascoe, Hawa P. Tundui, Marcia Dutra de Barcellos, Hans de Steur, Xavier Gellynck

Abstract:

A majority of Micro and Small Enterprises (MSEs) experience low or no growth. Understanding of their performance remains unfinished and disjointed, as there is no consensus on the factors influencing it, especially in developing countries. Using the Resource-Based View (RBV) as the theoretical background, this cross-sectional study employed four regression models to examine the influence of firm-level factors (firm-specific characteristics, firm resources, manager socio-demographic characteristics, and selected management practices) on the overall performance variations among 442 Tanzanian micro and small agri-food processing firms. Study results confirmed the RBV argument that intangible resources make a larger contribution to overall performance variations among firms than tangible resources do. Firms' tangible and intangible resources explained 34.5% of overall performance variations (intangible resources explained 19.4% of the overall performance variability, compared to tangible resources, which accounted for 15.1%), ranking first in explaining the overall performance variance. Firm-specific characteristics ranked second, influencing variations in overall performance by 29.0%. Selected management practices ranked third (6.3%), while the manager's socio-demographic factors were last on the list, as they influenced the overall performance variability among firms by only 5.1%. The study also found that firms that focus on proper utilization of tangible resources (financial and physical), set targets, and undertake better working capital management practices performed better than their counterparts (low and average performers). Furthermore, accumulation and proper utilization of intangible resources (relational, organizational, and reputational), undertaking performance monitoring practices, the age of the manager, and the choice of firm location and activity were the dominant significant factors influencing the variations among average and high performers, relative to low performers. Entrepreneurial background was a significant factor influencing variations in average and low-performing firms, indicating that entrepreneurial skills are crucial to achieving average levels of performance. Firm age, size, legal status, source of start-up capital, gender, education level, and total business experience of the manager were not statistically significant variables influencing the overall performance variations among the agri-food processors under study. The study has identified both significant and non-significant factors influencing performance variations among low-, average-, and high-performing micro and small agri-food processing firms in Tanzania. Therefore, results from this study will help managers, policymakers, and researchers to identify areas where more attention should be placed in order to improve the overall performance of MSEs in the agri-food industry.

Keywords: firm-level factors, micro and small enterprises, performance, regression analysis, resource-based-view

Procedia PDF Downloads 64
659 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients

Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado

Abstract:

Background: The introduction of prophylactic or preemptive therapies has effectively decreased the CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) or quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM) (Roche®) assay was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antivirals has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia to detect episodes of CMV infection/reactivation, and 2) the cut-off of viral load for introduction of ganciclovir (GCV). Pp65 antigenemia was performed, and the corresponding plasma samples were stored at -20°C for further CMV detection by CAP/CTM. Comparison of tests was performed by kappa index. The appearance of positive antigenemia was considered the state variable to determine the cut-off of CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). Antigenemia (AG) and PCR results were compared in 431 samples, and the kappa index was 30.9%. The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected CMV a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA for detecting positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity, and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasting 15 days or more). These data suggest that the use of antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though the CAP/CTM assay by Roche showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy.
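
The sketch below illustrates the ROC step described above: choosing the viral-load cut-off that best predicts a positive antigenemia result, here via the Youden index. The viral loads and antigenemia labels are illustrative values, not the study data, and the analysis in the paper was performed in SPSS rather than Python.

```python
# ROC-based cut-off selection sketch with illustrative data (Youden's J criterion).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

viral_load_iu_ml = np.array([0, 12, 18, 25, 30, 36, 40, 55, 80, 150, 300, 900], float)
antigenemia_pos = np.array([0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1])

fpr, tpr, thresholds = roc_curve(antigenemia_pos, viral_load_iu_ml)
best = np.argmax(tpr - fpr)                     # Youden's J = sensitivity + specificity - 1
print(f"AUC = {roc_auc_score(antigenemia_pos, viral_load_iu_ml):.2f}, "
      f"cut-off = {thresholds[best]:.1f} IU/mL, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```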

Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off

Procedia PDF Downloads 170
658 Analysis of Influencing Factors on Infield-Logistics: A Survey of Different Farm Types in Germany

Authors: Michael Mederle, Heinz Bernhardt

Abstract:

The management of machine fleets or autonomous vehicle control will considerably increase efficiency in future agricultural production. Entire process chains in particular, e.g. harvesting complexes with several interacting combine harvesters, grain carts, and removal trucks, provide a lot of optimization potential. Organization and pre-planning make these efficiency reserves accessible. One way to achieve this is to optimize infield path planning. Autonomous machinery in particular requires precise specifications about infield logistics to be navigated effectively and process-optimized in the fields, individually or in machine complexes. In the past, a lot of theoretical optimization has been done regarding infield logistics, mainly based on field geometry. However, there are reasons why farmers often do not apply the infield strategy suggested by mathematical route-planning tools. To make the computational optimization more useful for farmers, this study focuses on these influencing factors through expert interviews. As a result, practice-oriented navigation not only to the field but also within the field will become possible. The survey study is intended to cover the entire range of German agriculture. Rural mixed farms with simple technology equipment are considered as well as large agricultural cooperatives which farm thousands of hectares using track guidance and various other electronic assistance systems. First results show that farm managers using guidance systems increasingly adapt their infield logistics to direction-giving obstacles such as power lines. In consequence, they can avoid inefficient boom flips while applying plant protection with the sprayer. Livestock farmers rather focus on the application of organic manure, with its specific requirements concerning road conditions, landscape terrain, or field access points. Cultivation of sugar beets places great demands on infield patterns because of its particularities, such as the row-crop system or high logistics demands. Furthermore, several machines working in the same field simultaneously influence each other, regardless of whether or not they are of the same type. Specific infield strategies are always based on interactions of several different influences and decision criteria. Single working steps like tillage, seeding, plant protection, or harvest mostly cannot each be considered individually. The entire production process has to be taken into consideration to determine the right infield logistics. One long-term objective of this examination is to integrate the obtained influences on infield strategies as decision criteria into an infield navigation tool. In this way, path planning will become more practical for farmers, which is a basic requirement for automatic vehicle control and increasing process efficiency.

Keywords: autonomous vehicle control, infield logistics, path planning, process optimizing

Procedia PDF Downloads 208
657 Controlled Synthesis of Pt₃Sn-SnOx/C Electrocatalysts for Polymer Electrolyte Membrane Fuel Cells

Authors: Dorottya Guban, Irina Borbath, Istvan Bakos, Peter Nemeth, Andras Tompos

Abstract:

One of the greatest challenges in the implementation of polymer electrolyte membrane fuel cells (PEMFCs) is to find active and durable electrocatalysts. The cell performance is always limited by the oxygen reduction reaction (ORR) on the cathode, since it is at least 6 orders of magnitude slower than the hydrogen oxidation on the anode. Therefore, a high loading of Pt is required. Catalyst corrosion is also more significant on the cathode, especially in mobile applications, where rapid changes of loading have to be tolerated. Pt-Sn bulk alloys and SnO2-decorated Pt3Sn nanostructures are among the most studied bimetallic systems for fuel cell applications. Exclusive formation of supported Sn-Pt alloy phases with different Pt/Sn ratios can be achieved by using controlled surface reactions (CSRs) between hydrogen adsorbed on Pt sites and tetraethyl tin. In this contribution, our results for commercial and home-made 20 wt.% Pt/C catalysts modified by tin anchoring via CSRs are presented. The parent Pt/C catalysts were synthesized by a modified NaBH4-assisted ethylene-glycol reduction method using ethanol as a solvent, which resulted either in dispersed and highly stable Pt nanoparticles or in evenly distributed raspberry-like agglomerates, depending on the chosen synthesis parameters. The 20 wt.% Pt/C catalysts prepared in this way showed improved electrocatalytic performance in the ORR and better stability in comparison to the commercial 20 wt.% Pt/C catalysts. Then, in order to obtain Sn-Pt/C catalysts with a Pt/Sn = 3 ratio, the Pt/C catalysts were modified with tetraethyl tin (SnEt4) using three and five consecutive tin anchoring periods. According to in situ XPS studies, in the case of catalysts with highly dispersed Pt nanoparticles, pre-treatment in hydrogen even at 170°C resulted in complete reduction of the ionic tin to Sn0. No evidence of the presence of a SnO2 phase was found by means of XRD and EDS analysis. These results demonstrate that the method of CSRs is a powerful tool to create Pt-Sn bimetallic nanoparticles exclusively, without tin deposition onto the carbon support. On the contrary, the XPS results revealed that the tin-modified catalysts with raspberry-like Pt agglomerates always contained a fraction of non-reducible tin oxide. At the same time, they showed higher activity and better long-term stability in the ORR than Pt/C, which was attributed to the presence of SnO2 in close proximity/contact with the Pt-Sn alloy phase. It has been demonstrated that the content and dispersion of the fcc Pt3Sn phase within the electrocatalysts can be controlled by tuning the reaction conditions of the CSRs. The bimetallic catalysts displayed an outstanding performance in the ORR. The preparation of a highly dispersed 20Pt/C catalyst makes it possible to decrease the Pt content without a relevant decline in the electrocatalytic performance of the catalysts.

Keywords: anode catalyst, cathode catalyst, controlled surface reactions, oxygen reduction reaction, PtSn/C electrocatalyst

Procedia PDF Downloads 209
656 Doctor-Patient Interaction in an L2: Pragmatic Study of a Nigerian Experience

Authors: Ayodele James Akinola

Abstract:

This study investigated the use of English in doctor-patient interaction in a university teaching hospital in a southwestern state of Nigeria, with the aim of identifying the role of communication in an L2, patterns of communication, discourse strategies, pragmatic acts, and contexts that shape the interaction. Jacob Mey's notion of pragmatic acts, complemented with Emanuel and Emanuel's model of the doctor-patient relationship, provided the theoretical standpoint. Data comprising 7 audio-recorded doctor-patient interactions were collected from a university hospital in Oyo State, Nigeria. Interactions involving the use of the English language were purposefully selected. These were supplemented with patients' case notes and interviews conducted with doctors. Transcription was patterned on a modified version of Arminen's notations for conversation analysis. In the study, interaction in English between doctors and patients showed a preponderance of direct translation, code-mixing and code-switching, Nigerianisms, and the use of cultural worldviews to express medical experience. Irrespective of these, three patterns of communication, namely the paternalistic, interpretive, and deliberative, were identified. These were exhibited through varying discourse strategies. The paternalistic model reflected slightly casual conversational conventions and registers. These were achieved through the pragmemic activities of situated speech acts and psychological and physical acts, via patients' quarrel-induced acts, controlled and managed through doctors' shared situational knowledge. All these produced empathising, pacifying, promising, and instructing practs. The patients' practs were explaining, provoking, associating, and greeting in the paternalistic model. The informative model revealed the use of adjacency pairs, formal turn-taking, precise detailing, institutional talk, and dialogic strategies. Through the activities of speech, prosody, and physical acts, the practs of declaring, alerting, and informing were utilised by doctors, while the patients exploited adapting, requesting, and selecting practs. The negotiating conversational strategy of the deliberative model featured in the speech, prosody, and physical acts. In this model, practs of suggesting, teaching, persuading, and convincing were utilised by the doctors. The patients deployed the practs of questioning, demanding, considering, and deciding. The contextual variables revealed that other patterns (such as the phatic and informative) are also used, and they coalesce in the hospital within situational and psychological contexts. However, the paternalistic model was predominantly employed by doctors with over six years in practice, while the interpretive, informative, and deliberative models were found among registrars and others with fewer than six years of medical practice. Doctors' experience, patients' peculiarities, and shared cultural knowledge influenced doctor-patient communication in the study.

Keywords: pragmatics, communication pattern, doctor-patient interaction, Nigerian hospital situation

Procedia PDF Downloads 157
655 Challenges in the Last Mile of the Global Guinea Worm Eradication Program: A Systematic Review

Authors: Getahun Lemma

Abstract:

Introduction: Guinea Worm Disease (GWD), also known as dracunculiasis, is one of the oldest diseases in the history of mankind. Dracunculiasis is caused by a parasitic nematode, Dracunculus medinensis. Infection is acquired by drinking water contaminated with copepods containing infective Guinea Worm (GW) larvae. Almost one year after the infection, the worm usually emerges through the skin on a lower limb, causing severe pain and disability. Although there is no effective drug or vaccine against the disease, the chain of transmission can be effectively interrupted with simple and cost-effective public health measures. Death due to dracunculiasis is very rare; however, the disease results in a wide range of physical, social, and economic sequelae. It is usually common in rural, remote places of Sub-Saharan African countries among marginalized societies. Currently, GWD is one of the neglected tropical diseases on the verge of eradication. The global Guinea Worm Eradication Program (GWEP) was started in 1980. Since then, the program has achieved tremendous success in reducing the global burden and number of GW cases from 3.5 million to only 28 human cases at the end of 2018. However, it has recently been shown that not only humans can become infected, with a total of 1,105 animal infections reported at the end of 2018. Therefore, the objective of this study was to identify the existing challenges in the last mile of the GWEP in order to inform policy makers and stakeholders on potential measures to finally achieve eradication. Method: A systematic literature review was conducted on articles published from January 1, 2000 until May 30, 2019. Papers listed in the Cochrane Library, Google Scholar, ProQuest, PubMed, and Web of Science databases were searched and reviewed. Results: Twenty-five articles met the inclusion criteria of the study and were selected for analysis. Relevant data were extracted, grouped, and descriptively analyzed. Results showed the main challenges complicating the last mile of the global GWEP: 1. Unusual modes of transmission; 2. Rising animal Guinea Worm infection; 3. Suboptimal surveillance; 4. Insecurity; 5. Inaccessibility; 6. Inadequate safe water points; 7. Migration; 8. Poor case containment measures; 9. Ecological changes; and 10. New geographic foci of the disease. Conclusion: This systematic review identified that most of the current challenges in the GWEP have been present since the start of the campaign. However, the recent change in the epidemiological patterns and nature of GWD in the last remaining endemic countries illustrates a new twist in the global GWEP. Considering the complex nature of the current challenges, a more coordinated and multidisciplinary approach to GWD prevention and control measures is needed in the last mile of the campaign. These new strategies would help make history by eradicating dracunculiasis as the first parasitic disease ever to be eradicated.

Keywords: dracunculiasis, eradication program, guinea worm, last mile

Procedia PDF Downloads 103
654 Structure Modification of Leonurine to Improve Its Potency as Aphrodisiac

Authors: Ruslin, R. E. Kartasasmita, M. S. Wibowo, S. Ibrahim

Abstract:

An aphrodisiac is a substance contained in food or drugs that can arouse sexual instinct and increase pleasure; such substances are derived from plants, animals, and minerals. When consumed, substances with aphrodisiac activity can improve sexual instinct and duration. Leonurine is a compound with aphrodisiac activity that can be isolated from plants of Leonurus sp., known to the Sundanese people as deundereman; this plant is empirically used as an aphrodisiac, and isolation of its active compounds shows that it contains leonurine, so the compound is expected to have aphrodisiac activity. Leonurine can be isolated from the plant or synthesized chemically, with syringic acid as the starting material; it can also be obtained commercially, and derivatives of the compound can be synthesized in an effort to increase its activity. This study aims to obtain leonurine derivatives with better aphrodisiac activity than the parent compound by modifying the structure at the guanidino butyl ester group using butylamine and bromoethanol. The ArgusLab program version 4.0.1 was used to determine the binding energies, hydrogen bonds, and amino acids involved in the interaction of the compounds with the PDE5 receptor. The in vivo test of leonurine and its derivatives as aphrodisiac ingredients, together with measurement of testosterone hormone levels, used 27 male Wistar rats and 9 female rats of the same strain, aged about 12 weeks and weighing approximately 200 g each. The test animals were divided into 9 groups according to the type of compound and the dose given. Each treatment group was orally administered 2 mL per day for 5 days. On the sixth day, male rat sexual behavior was observed and blood was taken from the heart to measure testosterone levels using an ELISA technique. Statistical analysis in this study was ANOVA with the Least Significant Difference (LSD) post-hoc test, using the Statistical Product and Service Solutions (SPSS) program. The aphrodisiac efficacy of leonurine and its derivatives was demonstrated both in silico and in vivo: in silico, the leonurine derivatives showed lower binding energies than leonurine, indicating better predicted activity, and the in vivo tests in Wistar rats confirmed that the derivatives performed better than leonurine, showing that the in silico studies parallel the in vivo results. Modification of the structure at the guanidino butyl ester group with butylamine and bromoethanol increased aphrodisiac activity compared with the parent leonurine, and testosterone levels showed a significant improvement with the derivatives, especially the 1-RD compound at doses of 100 and 150 mg/kg body weight. The results showed that leonurine and its derivatives possess aphrodisiac activity and increase the amount of testosterone in the blood. The test compounds used in this study act as steroid precursors, resulting in increased testosterone.
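
A minimal sketch of the statistical step described above (one-way ANOVA followed by Fisher's LSD-style pairwise comparisons of testosterone levels) is given below; the group names and values are invented placeholders, not the study's measurements, and the unadjusted pairwise t-tests are a simplification of the classical LSD procedure, which uses the pooled ANOVA error term.

```python
# Sketch: one-way ANOVA followed by LSD-style pairwise comparisons of testosterone
# levels across treatment groups (hypothetical placeholder data).
from itertools import combinations
from scipy import stats

groups = {
    "control":       [2.1, 2.3, 1.9],
    "leonurine_100": [3.0, 3.2, 2.8],
    "1RD_100":       [4.1, 4.4, 4.0],
    "1RD_150":       [4.8, 5.1, 4.9],
}

f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# LSD-style follow-up: unadjusted pairwise t-tests, run only after a significant omnibus F
# (a simplification; classical LSD uses the pooled ANOVA mean square error)
if p_omnibus < 0.05:
    for a, b in combinations(groups, 2):
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}")
```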

Keywords: aphrodisiac, erectile dysfunction, leonurine, 1-RD, 2-RD

Procedia PDF Downloads 258
653 Verification of the Supercavitation Phenomena: Investigation of the Cavity Parameters and Drag Coefficients for Different Types of Cavitator

Authors: Sezer Kefeli, Sertaç Arslan

Abstract:

Supercavitation is a pressure-dependent process which provides an opportunity to eliminate wetted-surface effects on an underwater vehicle, owing to the differences in viscosity and velocity between the liquid (freestream) and gas phases. Cavitation occurs due to a rapid pressure drop or a temperature rise in the liquid phase. In this paper, pressure-based cavitation is investigated, as it is the type generally encountered in the underwater world. These vapor-filled, pressure-based cavities are unstable and harmful for any underwater vehicle, because the cavities (bubbles or voids) lead to intense shock waves while collapsing. Supercavitation, on the other hand, is a desired and stabilized phenomenon compared with general pressure-based cavitation. Supercavitation offers the prospect of minimizing form drag, and thus supercavitating vehicles have been revived. When proper circumstances are set up, either by increasing the operating speed of the underwater vehicle or by decreasing the pressure difference between the free stream and an artificial pressure, the continuity of supercavitation is obtainable. There are two types of supercavitation used to obtain a stable and continuous cavity, called natural and artificial supercavitation. In order to generate natural supercavitation, various mechanical structures have been developed, which are called cavitators. In the literature, many cavitator types have been studied since the 1900s, either experimentally or numerically on CFD platforms, with the intent of observing natural supercavitation. In this paper, firstly, experimental results are obtained, and trend lines are generated based on supercavitation parameters in terms of the cavitation number (σ), the form drag coefficient (C_D), the dimensionless cavity diameter (d_m/d_c), and the cavity length (L_c/d_c). After that, natural cavitation verification studies are carried out for disk- and cone-shaped cavitators. In addition, supercavitation parameters are numerically analyzed at different operating conditions, and CFD results are fitted to the trend lines of the experimental results. The aims of this paper are to generate one generally accepted drag coefficient equation for disk and cone cavitators at different cavitator half angles and to investigate the supercavitation parameters with respect to the cavitation number. Moreover, 165 CFD analyses are performed at different cavitation numbers in FLUENT version 21R2. Five different cavitator types are modeled in SCDM with respect to the cavitators' half angles. After that, a CFD database is generated from the numerical results, and new trend lines are generated based on the supercavitation parameters. These trend lines are compared with the experimental results. Finally, the generally accepted drag coefficient equation and the equations of the supercavitation parameters are generated.
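
For readers unfamiliar with how a single drag-coefficient trend line can be fitted to cavitation-number data, the sketch below fits the commonly used empirical form C_D(σ) = C_D0(1 + kσ) with nonlinear least squares; the data points are illustrative placeholders, not the paper's experimental or CFD results.

```python
# Sketch: fitting a drag-coefficient trend line C_D(sigma) = C_D0 * (1 + k*sigma)
# to cavitation-number data, as is commonly done for disk/cone cavitators.
import numpy as np
from scipy.optimize import curve_fit

def drag_model(sigma, c_d0, k):
    # widely used empirical form for supercavitating cavitators
    return c_d0 * (1.0 + k * sigma)

sigma = np.array([0.05, 0.08, 0.10, 0.15, 0.20, 0.25])   # cavitation number (placeholder)
c_d   = np.array([0.86, 0.89, 0.90, 0.94, 0.98, 1.02])   # drag coefficient (placeholder)

popt, pcov = curve_fit(drag_model, sigma, c_d, p0=[0.82, 1.0])
c_d0, k = popt
print(f"C_D0 = {c_d0:.3f}, k = {k:.3f}")  # disk cavitators are often quoted near C_D0 = 0.82
```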

Keywords: cavity envelope, CFD, high speed underwater vehicles, supercavitation, supercavitating flows, supercavitation parameters, drag reduction, viscous force elimination, natural cavitation verification

Procedia PDF Downloads 113
652 Foodborne Outbreak Calendar: Application of Time Series Analysis

Authors: Ryan B. Simpson, Margaret A. Waskow, Aishwarya Venkat, Elena N. Naumova

Abstract:

The Centers for Disease Control and Prevention (CDC) estimate that 31 known foodborne pathogens cause 9.4 million cases of foodborne illness annually in the US. Over 90% of these illnesses are associated with exposure to Campylobacter, Cryptosporidium, Cyclospora, Listeria, Salmonella, Shigella, Shiga-toxin-producing E. coli (STEC), Vibrio, and Yersinia. Contaminated products carry pathogens typically causing an intestinal illness manifested by diarrhea, stomach cramping, nausea, weight loss, and fatigue, and may result in death in fragile populations. Since 1998, the National Outbreak Reporting System (NORS) has allowed routine collection of suspected and laboratory-confirmed cases of food poisoning. While retrospective analyses have revealed common pathogen-specific seasonal patterns, little is known concerning the stability of those patterns over time and whether they can be used for preventative forecasting. The objective of this study is to construct a calendar of foodborne outbreaks of nine infections based on the peak timing of outbreak incidence in the US from 1996 to 2017. Reported cases were abstracted from FoodNet for Salmonella (135,115), Campylobacter (121,099), Shigella (48,520), Cryptosporidium (21,701), STEC (18,022), Yersinia (3,602), Vibrio (3,000), Listeria (2,543), and Cyclospora (758). Monthly counts were compiled for each agent, seasonal peak timing and peak intensity were estimated, and the stability of seasonal peaks and the synchronization of infections were examined. Negative binomial harmonic regression models with the delta method were applied to derive confidence intervals for the peak timing for each year and for the overall study-period estimates. Preliminary results indicate that five infections continue to lead as major causes of outbreaks, exhibiting steady upward trends with annual increases in cases ranging from 2.71% (95%CI: [2.38, 3.05]) for Campylobacter, 4.78% (95%CI: [4.14, 5.41]) for Salmonella, 7.09% (95%CI: [6.38, 7.82]) for E. coli, and 7.71% (95%CI: [6.94, 8.49]) for Cryptosporidium, to 8.67% (95%CI: [7.55, 9.80]) for Vibrio. Strong synchronization of summer outbreaks was observed, caused by Campylobacter, Vibrio, E. coli, and Salmonella, peaking at 7.57 ± 0.33, 7.84 ± 0.47, 7.85 ± 0.37, and 7.82 ± 0.14 calendar months, respectively, with serial cross-correlations ranging 0.81-0.88 (p < 0.001). Over 21 years, Listeria and Cryptosporidium peaks (8.43 ± 0.77 and 8.52 ± 0.45 months, respectively) have tended to arrive 1-2 weeks earlier, while Vibrio peaks (7.8 ± 0.47) have shifted 2-3 weeks later. These findings will be incorporated in forecast models to predict common paths of spread, long-term trends, and the synchronization of outbreaks across etiological agents. Predictive modeling of foodborne outbreaks should consider long-term changes in seasonal timing, spatiotemporal trends, and sources of contamination.
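
The sketch below illustrates the core of the harmonic-regression step described above: a negative binomial GLM with cosine and sine terms plus a linear trend, from which the seasonal peak timing is recovered from the fitted harmonic coefficients. The monthly counts are synthetic placeholders, the delta-method confidence intervals used in the study are omitted, and only the point estimate of the peak month is computed.

```python
# Sketch: negative-binomial harmonic regression to estimate seasonal peak timing
# of monthly outbreak counts (synthetic stand-in data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
months = np.arange(1, 12 * 10 + 1)                 # 10 years of monthly observations
season = 2 * np.pi * months / 12.0
mu = np.exp(3.0 + 0.4 * np.cos(season) + 0.6 * np.sin(season) + 0.002 * months)
counts = rng.poisson(mu)                           # stand-in for reported cases

# design matrix: intercept, cos, sin, linear trend
X = sm.add_constant(np.column_stack([np.cos(season), np.sin(season), months]))
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()

b_cos, b_sin = model.params[1], model.params[2]
# peak of A*cos(wt - phi) occurs at t = phi / w, with phi = atan2(b_sin, b_cos)
peak_month = ((12.0 / (2 * np.pi)) * np.arctan2(b_sin, b_cos)) % 12
print(f"Estimated peak timing: {peak_month:.2f} calendar months")
```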

Keywords: foodborne outbreak, national outbreak reporting system, predictive modeling, seasonality

Procedia PDF Downloads 107
651 Effect of Energy Management Practices on Sustaining Competitive Advantage among Manufacturing Firms: A Case of Selected Manufacturers in Nairobi, Kenya

Authors: Henry Kiptum Yatich, Ronald Chepkilot, Aquilars Mutuku Kalio

Abstract:

Studies on energy management have focused on environmental conservation and the reduction of production and operation expenses. However, transferring the gains of energy management practices into competitive advantage is important to manufacturers in Kenya. Success in managing competitive advantage arises from a firm's ability to identify and implement actions that can give the company an edge over its rivals. Manufacturing firms in Kenya are the highest consumers of both electricity and petroleum products. In this regard, the study posits that transferring the gains of energy management practices to competitive advantage is imperative. The study was carried out in Nairobi and its environs, which host the largest number of manufacturers. The study objectives were: to determine the effect of implementing energy management regulations on sustaining competitive advantage, to determine the effect of implementing company energy management policy on competitive advantage, to examine the effect of implementing energy-efficient technology on sustaining competitive advantage, and to assess the effect of percentage energy expenditure on sustaining competitive advantage among manufacturing firms. The study adopted a survey research design, with a study population of 145,987. A sample of 384 respondents was selected randomly from 21 proportionately selected firms. Structured questionnaires were used to collect data. Data analysis was done using descriptive statistics (means and standard deviations) and inferential statistics (correlation, regression, and t-tests). Data are presented using tables and diagrams. The study found that energy management regulations, company energy management policies, and energy expenses are significant predictors of competitive advantage (CA). However, energy-efficient technology, as a component of energy management practices, did not have a significant relationship with competitive advantage. The study revealed that the level of awareness in the sector stood at 49.3%. Energy expenses in the sector stood at an average of 10.53% of the firms' total revenue. The study showed that gains from energy efficiency practices can be transferred to competitive strategies so as to improve firm competitiveness. The study recommends that manufacturing firms consider energy management practices as part of their strategic agenda when assessing and reviewing their energy management practices as possible strategies for sustaining competitiveness. Government agencies such as the Energy Regulatory Commission, the Ministry of Energy and Petroleum, and the Kenya Association of Manufacturers should enforce the Energy Management Regulations 2012, with enhanced stakeholder involvement and sensitization, so as to promote the sustenance of firm competitiveness. Government support in providing incentives and rebates for the acquisition of energy-efficient technologies should be pursued. Given the study's limitations, future experimental and longitudinal studies need to be carried out. It should be noted that energy management practices yield enormous benefits to all stakeholders and that the practice should not be considered a competitive tool but rather a universal practice.
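
As an illustration of the inferential step (regressing competitive advantage on the energy-management predictors), a minimal sketch follows; the data frame, coefficients, and variable names are hypothetical placeholders mirroring the study's constructs, not its survey data.

```python
# Sketch: multiple regression of competitive advantage on energy-management predictors
# (hypothetical placeholder data; column names mirror the study's constructs).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "regulations": rng.normal(3.5, 0.8, n),   # implementation of energy regulations (Likert mean)
    "policy":      rng.normal(3.2, 0.9, n),   # company energy management policy
    "technology":  rng.normal(3.0, 1.0, n),   # energy-efficient technology
    "energy_exp":  rng.normal(10.5, 3.0, n),  # energy expenses, % of total revenue
})
df["competitive_advantage"] = (0.4 * df["regulations"] + 0.3 * df["policy"]
                               - 0.05 * df["energy_exp"] + rng.normal(0, 0.5, n))

model = smf.ols("competitive_advantage ~ regulations + policy + technology + energy_exp",
                data=df).fit()
print(model.summary())
```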

Keywords: energy, efficiency, management, guidelines, policy, technology, competitive advantage

Procedia PDF Downloads 362
650 Enhancing Industrial Wastewater Treatment: Efficacy and Optimization of Ultrasound-Assisted Laccase Immobilized on Magnetic Fe₃O₄ Nanoparticles

Authors: K. Verma, V. S. Moholkar

Abstract:

In developed countries, water pollution caused by industrial discharge has emerged as a significant environmental concern over the past decades. However, despite ongoing efforts, a fully effective and sustainable remediation strategy has yet to be identified. This paper describes how enzymatic and sonochemical treatments have demonstrated great promise in degrading bio-refractory pollutants. In particular, a compelling area of interest lies in the combined technique of sono-enzymatic treatment, which has exhibited a synergistic enhancement surpassing that of the individual techniques. This study employed the covalent attachment method to immobilize laccase from Trametes versicolor onto amino-functionalized magnetic Fe₃O₄ nanoparticles. To comprehensively characterize the synthesized free nanoparticles and the laccase-immobilized nanoparticles, various techniques such as X-ray diffraction (XRD), Fourier-transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), vibrating sample magnetometry (VSM), and surface area analysis by the Brunauer-Emmett-Teller (BET) method were employed. The size of the immobilized Fe₃O₄@Laccase was found to be 60 nm, and the maximum loading of laccase was found to be 24 mg/g of nanoparticles. An investigation was conducted to study the effect of various process parameters, such as the immobilized Fe₃O₄@Laccase dose, temperature, and pH, on the % chemical oxygen demand (COD) removal as the response. The statistical design pinpointed the optimum conditions (immobilized Fe₃O₄@Laccase dose = 1.46 g/L, pH = 4.5, and temperature = 66 °C), resulting in a remarkable 65.58% COD removal within 60 minutes. An even greater improvement (90.31% COD removal) was achieved with the ultrasound-assisted enzymatic reaction utilizing a 10% duty cycle. The investigation of various kinetic models for free and immobilized laccase, such as the Haldane, Yano and Koga, and Michaelis-Menten models, showed that the ultrasound application impacted the kinetic parameters Vmax and Km. Specifically, Vmax values for free and immobilized laccase were found to be 0.021 mg/L·min and 0.045 mg/L·min, respectively, while Km values were 147.2 mg/L for free laccase and 136.46 mg/L for immobilized laccase. The lower Km and higher Vmax for immobilized laccase indicate its enhanced affinity towards the substrate, likely due to ultrasound-induced alterations in the enzyme's conformation and increased exposure of active sites, leading to more efficient degradation. Furthermore, toxicity and liquid chromatography-mass spectrometry (LC-MS) analyses revealed that after the treatment process, the wastewater exhibited 70% less toxicity than before treatment, with over 25 compounds degraded by more than 75%. Finally, the prepared immobilized laccase had excellent recyclability, retaining 70% of its activity over 6 consecutive cycles. A straightforward manufacturing strategy and outstanding performance make the recyclable magnetic immobilized laccase (Fe₃O₄@Laccase) a promising option for various environmental applications, particularly in water pollution control and treatment.
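
A brief sketch of how Vmax and Km can be estimated by fitting the Michaelis-Menten model to rate-versus-substrate data is shown below; the substrate concentrations and rates are illustrative placeholders, while the Vmax and Km values quoted in the abstract come from the study itself.

```python
# Sketch: estimating Vmax and Km by nonlinear least-squares fitting of the
# Michaelis-Menten model v = Vmax * S / (Km + S) to rate-vs-substrate data.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

substrate = np.array([10, 25, 50, 100, 200, 400, 800])                    # mg/L (placeholder)
rate      = np.array([0.003, 0.007, 0.012, 0.019, 0.027, 0.033, 0.038])   # mg/L·min (placeholder)

popt, _ = curve_fit(michaelis_menten, substrate, rate, p0=[0.04, 100])
vmax, km = popt
print(f"Vmax = {vmax:.3f} mg/L·min, Km = {km:.1f} mg/L")
```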

Keywords: kinetic, laccase enzyme, sonoenzymatic, ultrasound irradiation

Procedia PDF Downloads 41
649 Bacterial Diversity in Human Intestinal Microbiota and Correlations with Nutritional Behavior, Physiology, Xenobiotics Intake and Antimicrobial Resistance in Obese, Overweight and Eutrophic Individuals

Authors: Thais O. de Paula, Marjorie R. A. Sarmiento, Francis M. Borges, Alessandra B. Ferreira-Machado, Juliana A. Resende, Dioneia E. Cesar, Vania L. Silva, Claudio G. Diniz

Abstract:

Obesity is currently a worldwide public health threat, considered a pandemic multifactorial disease related to the human gut microbiota (GM). In addition, GM is considered an important reservoir of antimicrobial resistance genes (ARG), and little is known about GM and ARG in obesity, considering the altered physiology and xenobiotics intake. As regional and social behavior may play important roles in GM modulation, and as most studies are based on small sample sizes and various methodological approaches, resulting in difficulties for data comparison, this study focused on the investigation of GM bacterial diversity in obese (OB), overweight (OW), and eutrophic (ET) individuals, considering their nutritional, clinical, and social characteristics, and on a comparative screening of ARG related to their physiology and xenobiotics intake. The microbial community was assessed by FISH, considering phyla as the taxonomic level, and by PCR-DGGE followed by dendrogram evaluation (UPGMA method) of the fecal metagenome of 72 volunteers classified according to their body mass index (BMI). Nutritional, clinical, and social parameters and xenobiotics intake were recorded for correlation analysis. The fecal metagenome was also used as a template for PCR targeting 59 different ARG. Overall, 62% of OB individuals were hypertensive, compared with 12% of OW and 4% of ET individuals. Most of the OB individuals were rated as low income (80%). Lower relative bacterial densities were observed in the OB compared to the ET group for almost all studied taxa (p < 0.05), with the Firmicutes/Bacteroidetes ratio increased in the OB group. OW individuals showed bacterial densities representative of a GM more similar to that of the OB group. All participants were clustered into 3 different groups based on the PCR-DGGE fingerprint patterns (C1, C2, C3), with OB mostly grouped in C1 (83.3%) and ET mostly grouped in C3 (50%); cluster C2 showed to be transitional. Among the 27 ARG detected, a cluster of 17 was observed in all groups, suggesting a common core. In general, ARG were observed mostly within OB individuals, followed by OW and ET. The ratio between ARG and bacterial groups may suggest that ARG were more related to enterobacteria. Positive correlations were observed between ARG and BMI, calories, and xenobiotics intake (especially the use of sweeteners). As with the nutritional and clinical characteristics, our data may suggest that the GM of OW individuals behaves in a heterogeneous pattern, occasionally more similar to either the OB or the ET group. Regardless of the regional and social behaviors of our population, the methodological approaches in this study were complementary and confirmatory. The imbalance of GM across the health-disease interface in obesity is a matter of fact, but its influence on the host's physiology is still to be clearly elucidated to help in understanding the multifactorial etiology of obesity. Although the results agree with observations that GM is altered in obesity, the altered physiology in OB individuals seems also to be associated with increased xenobiotics intake and may interfere with GM towards antimicrobial resistance, as observed by the fecal metagenome and ARG screening. Support: FAPEMIG, CNPQ, CAPES, PPGCBIO/UFJF.
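
The sketch below shows the clustering step in miniature: UPGMA (average-linkage) clustering of binary PCR-DGGE band patterns, the approach used to derive clusters C1-C3; the band matrix and sample labels are invented placeholders, not the study's fingerprints.

```python
# Sketch: UPGMA (average-linkage) clustering of PCR-DGGE fingerprint profiles.
# Rows are individuals, columns are presence/absence of DGGE bands (placeholder data).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

bands = np.array([
    [1, 1, 0, 1, 0, 1],   # OB-1
    [1, 1, 0, 1, 1, 1],   # OB-2
    [0, 1, 1, 0, 1, 0],   # ET-1
    [0, 0, 1, 0, 1, 0],   # ET-2
    [1, 0, 1, 1, 0, 1],   # OW-1
], dtype=bool)

# Dice distance is a common choice for presence/absence band patterns
dist = pdist(bands, metric="dice")
tree = linkage(dist, method="average")        # UPGMA = average linkage
clusters = fcluster(tree, t=3, criterion="maxclust")
print("Cluster assignments:", clusters)
# scipy.cluster.hierarchy.dendrogram(tree) would draw the dendrogram with matplotlib
```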

Keywords: antimicrobial resistance, bacterial diversity, gut microbiota, obesity

Procedia PDF Downloads 138
648 A Comparative Human Rights Analysis of Expulsion as a Counterterrorism Instrument: An Evaluation of Belgium

Authors: Louise Reyntjens

Abstract:

Where criminal law used to be the traditional response to cope with the terrorist threat, European governments are increasingly relying on administrative paths. The reliance on immigration law fits into this trend. Terrorism is seen as a civilizational menace emanating from abroad. In this context, the expulsion of dangerous aliens, immigration law's core task, is put forward as a key security tool. Governments all over Europe are focusing on removing dangerous individuals from their territory rather than bringing them to justice. This research reflects on the consequences for the expelled individuals' fundamental rights. For this, the author selected four European countries for a comparative study: Belgium, France, the United Kingdom, and Sweden. All these countries face similar social and security issues, igniting the recourse to immigration law as a counterterrorism tool. Yet, they adopt very different approaches: the United Kingdom positions itself on the repressive side of the spectrum. Sweden, on the other hand, also 'securitized' its immigration policy after the recent terrorist attack in Stockholm, but remains on the tolerant side of the spectrum. Belgium and France are situated in between. This paper addresses the situation in Belgium. In 2017, the Belgian parliament introduced several legislative changes by which it considerably expanded and facilitated the possibility of expelling unwanted aliens. First, the expulsion measure was subjected to new and questionable definitions: a serious attack on the nation's safety used to be required to expel certain categories of aliens; presently, mere suspicions suffice to fulfil the new definition of a 'serious threat to national security'. This definition fails to satisfy the principle of legality: neither the law nor the preparatory works clarify what is meant by 'a threat to national security'. This creates the risk of submitting the interpretation of this concept almost entirely to the discretion of the immigration authorities. Secondly, in the name of intervening more quickly and efficiently, the automatically suspensive appeal against expulsions was abolished. The European Court of Human Rights nonetheless requires such an automatically suspensive appeal under Articles 13 and 3 of the Convention. Whether this procedural reform will stand to endure is thus questionable. This contribution also raises questions regarding expulsion's efficacy as a key security tool. In a globalized and mobile world, particularly in a European Union with no internal boundaries, questions can be raised about the usefulness of this measure. Even more so, by simply expelling a dangerous individual, states avoid their responsibility and shift the risk to another state. Criminal law might in these instances be more capable of providing a conclusive and long-term response. This contribution explores the human rights consequences of expulsion as a security tool in Belgium. It also offers a critical view of its efficacy for protecting national security.

Keywords: Belgium, counter-terrorism and human rights, expulsion, immigration law

Procedia PDF Downloads 104
647 Learners’ Preferences in Selecting Language Learning Institute (A Study in Iran)

Authors: Hoora Dehghani, Meisam Shahbazi, Reza Zare

Abstract:

During the previous decade, a significant evolution occurred in the number of private educational centers and, accordingly, in the number of providers and students of these centers around the world. The number of language teaching institutes in Iran, which are considered part of the private educational sector, is also growing exponentially, as the demand for learning foreign languages has increased sharply in recent years. This has caused competition among institutions to provide better services tailored to students' demands. Along with the growth of the education industry, higher education institutes should apply marketing-related concepts and view students as customers, because students' outlooks resemble those of consumers of education. Studying the influential factors in the selection of an institute has multiple benefits. Firstly, it makes institutions aware of students' choice factors. Secondly, institutions can use the obtained information to improve their marketing methods. It also helps institutions understand students' outlooks, which can be applied to expand their knowledge of students. Moreover, it provides practical evidence for educational centers to plan useful amenities and programs and to use efficient policies to cater to the market, and it helps them execute methods that increase students' feelings of contentment and assurance. Thus, this study explored the influencing factors in the selection of a language learning institute by language learners and examined and compared their importance across age groups and genders. In the first phase of the study, the researchers purposefully selected 15 language learners as representative cases within the specified age ranges and genders and interviewed them to explore the elements comprising their language institute selection process, analyzing the results qualitatively. In the second phase, the researchers turned the identified elements into items of a questionnaire, and 1000 English learners across varying educational contexts rated them. The TOPSIS method was used to analyze the data quantitatively by representing the level of importance of the items for the participants in general and specifically within each subcategory of gender and age group. The results indicated that educational quality, teaching method, duration of training courses, establishing need-oriented courses, and easy access were the most important elements. On the other hand, offering training in different languages, specializing in the education of only one language, a uniform and appropriate appearance of office staff, having teachers who are native speakers of the language of instruction, and applying computer or online tests instead of the usual paper tests were, respectively, the least important choice factors in selecting a language institute. In addition, some comparisons among different groups' ratings of choice factors were made, which revealed differences among the groups' priorities in choosing a language institute.
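
A compact sketch of the TOPSIS procedure used for ranking the choice factors is given below; the decision matrix, weights, and factor labels are hypothetical placeholders, not the study's questionnaire data.

```python
# Sketch: a minimal TOPSIS ranking of choice factors from mean ratings
# (hypothetical decision matrix; higher closeness = more important factor).
import numpy as np

# rows = choice factors, columns = criteria (e.g., ratings by different respondent groups)
matrix = np.array([
    [4.6, 4.4, 4.7],   # educational quality
    [4.3, 4.5, 4.2],   # teaching method
    [3.1, 3.0, 3.3],   # easy access
    [2.2, 2.5, 2.1],   # uniform appearance of office staff
], dtype=float)
weights = np.array([0.5, 0.3, 0.2])        # criterion weights (sum to 1)
benefit = np.array([True, True, True])     # all criteria are benefit-type here

# 1) vector-normalize, 2) weight, 3) ideal/anti-ideal points, 4) closeness coefficient
norm = matrix / np.sqrt((matrix ** 2).sum(axis=0))
weighted = norm * weights
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti  = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
d_plus  = np.sqrt(((weighted - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((weighted - anti) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)
print("Closeness scores (higher = more important):", np.round(closeness, 3))
```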

Keywords: choice factors, EFL institute selection, english learning, need analysis, TOPSIS

Procedia PDF Downloads 134
646 Knowledge, Attitude, and Practices of Nurses on the Pain Assessment and Management in Level 3 Hospitals in Manila

Authors: Florence Roselle Adalin, Misha Louise Delariarte, Fabbette Laire Lagas, Sarah Emanuelle Mejia, Lika Mizukoshi, Irish Paullen Palomeno, Gibrianne Alistaire Ramos, Danica Pauline Ramos, Josefina Tuazon, Jo Leah Flores

Abstract:

Pain, an often missed and undertreated symptom, affects the quality of life of individuals. Nurses are key players in providing effective pain management to decrease the morbidity and mortality of patients in pain. Nurses' knowledge of and attitude towards pain greatly affect their ability in assessment and management. The Pain Society of the Philippines has recognized the inadequacy and inaccessibility of data on the knowledge, skills, and attitude of nurses regarding pain management in the country. This study may be the first of its kind in the country, giving it the potential to contribute greatly to nursing education and practice by providing valuable baseline data. Objectives: This study aims to describe the level of knowledge and attitude, and the current practices, of nurses on pain assessment and management, and to determine the relationship of nurses' knowledge and attitude with years of experience, training on pain management, and clinical area of practice. Methodology: A survey research design was employed. Four hospitals were selected through purposive sampling. A total of 235 medical-surgical unit and intensive care unit (ICU) nurses participated in the study. The tool used was a combination of a demographic survey, the Nurses' Knowledge and Attitude Survey Regarding Pain (NKASRP), and the Acute Pain Evidence Based Practice Questionnaire (APEBPQ), with self-report questions on non-pharmacologic pain management. The data obtained were analysed using descriptive statistics, two-sample t-tests for clinical areas and training, and Pearson product-moment correlation to identify the relationship of the level of knowledge and attitude with years of experience. Results and Analysis: The mean knowledge and attitude score of the nurses was 47.14%. The majority answered 'most of the time' or 'all the time' on 84.12% of practice items on pain assessment, implementation of non-pharmacologic interventions, evaluation, and documentation. Three of 19 practice items describing morphine and opioid administration in special populations were done only 'a little of the time'. The most utilized non-pharmacologic interventions were deep breathing exercises (79.66%), massage therapy (27.54%), and ice therapy (26.69%). There was no significant relationship between knowledge scores and years of clinical experience (p = 0.05, r = -0.09). Moreover, there was not enough evidence to show a difference in nurses' knowledge and attitude scores in relation to the presence of training (p = 0.41) or area (medical-surgical or ICU) of clinical practice (p = 0.53). Conclusion and Recommendations: The findings of the study showed that the level of knowledge and attitude of nurses on pain assessment and management is suboptimal and that there is no relationship between nurses' knowledge and attitude and years of experience. It is recommended that further studies look into the nursing curriculum on pain education, culture-specific pain management protocols, and evidence-based practices in the country.

Keywords: knowledge and attitude, nurses, pain management, practices on pain management

Procedia PDF Downloads 325