Search results for: supplier evaluation

500 Inherent Difficulties in Countering Islamophobia

Authors: Imbesat Daudi

Abstract:

Islamophobia, which is a billion-dollar industry, is widespread, especially in the United States, Europe, India, Israel, and countries that have Muslim minorities at odds with their governmental policies. Hatred of Islam in the West did not evolve spontaneously; it was methodically created. Islamophobia's current format has been designed to spread on its own, find a space in the Western psyche, and resist its eradication. Hatred has been sustained by neoconservative ideologues and their allies, who are supported by the mainstream media. Social scientists have evaluated how ideas spread, why any idea can go viral, and where new ideas find space in our brains. This was possible because of the advances in the computational power of software and computers. The spreading of ideas, including Islamophobia, follows an S-shaped curve with three phases: an initial exploratory phase with a long lag period, an explosive phase if ideas go viral, and a final phase when ideas find space in the human psyche. In the initial phase, an idea is quickly examined in a center in the prefrontal lobe. When it is deemed relevant, it is sent for evaluation to another center of the prefrontal lobe, where it is critically examined. Once it takes a final shape, the idea is sent as a final product to a center in the occipital lobe. This center cannot critically evaluate ideas; it can only defend them from critics. Counterarguments, no matter how scientific, are automatically rejected. Therefore, arguments that could be highly effective in the early phases are counterproductive once ideas are stored in the occipital lobe. Anti-Islamophobic intellectuals have done a very good job of countering Islamophobic arguments. However, they have not been as effective as neoconservative ideologues who have promoted anti-Muslim rhetoric based on half-truths, misinformation, or outright lies. The failure is partly due to the support pro-war activists receive from the mainstream media, state institutions, mega-corporations engaged in violent conflicts, and think tanks that provide Islamophobic arguments. However, there are also scientific reasons why anti-Islamophobic thinkers have been less effective: the dynamics of spreading ideas change once they are stored in the occipital lobe. The human brain is incapable of evaluating ideas further once it accepts them as its own; therefore, a different strategy is required to be effective. This paper examines 1) why anti-Islamophobic intellectuals have failed to change the minds of non-Muslims and 2) the steps for countering hatred. Simply put, a new strategy is needed that can effectively counteract hatred of Islam and Muslims. Islamophobia is a disease that requires strong measures. Fighting hatred is always a challenge, but if we understand why Islamophobia is taking root in the twenty-first century, we can succeed in challenging Islamophobic arguments. That will require a coordinated effort of intellectuals, writers and the media.

Keywords: islamophobia, Islam and violence, anti-islamophobia, demonization of Islam

Procedia PDF Downloads 29
499 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, owing to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, in addition to the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis and the kriging technique in order to identify the hydrogeochemical processes, characterize the factors controlling the distribution of groundwater geochemistry and develop risk maps, exploiting data obtained from chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Chemical maps of groundwater based on a 2-D spatial Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater under threat of contamination. The GIS-based maps showed that higher rates of contamination were observed in the central and southern areas, with a relatively smaller extent in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagram indicated that the groundwater of the research area was mainly influenced by saline water and NO3. The pH measurements revealed slightly acidic conditions due to dissolved atmospheric CO2 in the soil, while saline water had a major impact on the higher values of TDS and EC. Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by sea water, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. CaHCO3 was the most predominant water type in the study area. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of sea water, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution, and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical models provided more accurate results for identifying the sources of contamination and evaluating the groundwater quality. GIS also proved to be a valuable tool to visualize and analyze the issues affecting water quality in Miryang city.
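The interpolation step described above (kriging of point measurements into a contamination-risk surface) can be illustrated with a minimal sketch. The snippet below grids scattered well measurements with ordinary kriging; the pykrige package, the spherical variogram and the synthetic coordinate/NO3 arrays are assumptions made for illustration, not details taken from the study, and the kriged grid would normally be exported to GIS for mapping.

```python
# Minimal ordinary-kriging sketch (assumes the pykrige package); all values are illustrative only.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical well coordinates (km) and measured NO3 concentrations (mg/L) for 79 samples
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 20.0, 79)
y = rng.uniform(0.0, 20.0, 79)
no3 = rng.lognormal(mean=2.0, sigma=0.5, size=79)

ok = OrdinaryKriging(x, y, no3, variogram_model="spherical")
grid_x = np.arange(0.0, 20.0, 0.5)   # interpolation grid spacing (km)
grid_y = np.arange(0.0, 20.0, 0.5)
z_interp, variance = ok.execute("grid", grid_x, grid_y)  # kriged surface + kriging variance

# z_interp can then be written out as a raster and classified into contamination-risk zones in GIS.
```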

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 147
498 Bio-Medical Equipment Technicians: Crucial Workforce to Improve Quality of Health Services in Rural Remote Hospitals in Nepal

Authors: C. M. Sapkota, B. P. Sapkota

Abstract:

Background: Continuous developments in science and technology are increasing the availability of thousands of medical devices, all of which should be of good quality and used appropriately to address global health challenges. Biomedical devices are becoming ever more indispensable in health service delivery, and among the key workforce responsible for their design, development, regulation, evaluation and training in their use, the biomedical equipment technician (BMET) is crucial. As a pivotal member of the health workforce, biomedical technicians are an essential component of the quality health service delivery mechanism supporting the attainment of the Sustainable Development Goals. Methods: The study was based on a cross-sectional descriptive design. Indicators measuring the quality of health services were assessed in Mechi Zonal Hospital (MZH) and Sagarmatha Zonal Hospital (SZH). Indicators were calculated from data on hospital utilization and performance in 2018 available in the medical record sections of both hospitals. MZH employed a BMET during 2018, but SZH had no BMET in 2018. Focus group discussions with health workers in both hospitals were conducted to validate the hospital records, and client exit interviews were conducted to assess the level of client satisfaction in both hospitals. Results: In MZH there was round-the-clock availability and utilization of radiodiagnostic and laboratory equipment, and the operation theatre was functional throughout the year. The bed occupancy rate in MZH was 97%, but in SZH it was only 63%. In SZH, the operation theatre was functional on only 54% of the days in 2018. A CT scan machine had just been installed but was not functional, and the computerized X-ray unit in SZH was functional on only 72% of the days. The level of client satisfaction was 87% in MZH but just 43% in SZH. MZH performed all 256 Caesarean sections, whereas SZH performed only 36% of 210 Caesarean sections in 2018. In the annual performance ranking of government hospitals, MZH was placed 1st whereas SZH was placed 19th out of 32 referral hospitals nationwide in 2018. Conclusion: Biomedical equipment technicians are crucial members of the human resources for health team with a pivotal role. Trained and qualified BMET professionals are required within health-care systems in order to design, evaluate, regulate, acquire, maintain, manage and train on safe medical technologies. They apply knowledge of engineering and technology to health-care systems to ensure the availability, affordability, accessibility, acceptability and utilization of safer, higher quality, effective, appropriate and socially acceptable biomedical technology for preventive, promotive, curative, rehabilitative and palliative care across all levels of health service delivery.

Keywords: biomedical equipment technicians, BMET, human resources for health, HRH, quality health service, rural hospitals

Procedia PDF Downloads 108
497 A Public Health Perspective on Deradicalisation: Re-Conceptualising Deradicalisation Approaches

Authors: Erin Lawlor

Abstract:

In 2008 Time magazine named terrorist rehabilitation as one of the best ideas of the year. The term deradicalisation has become synonymous with rehabilitation within security discourse. The allure for a “quick fix” when managing terrorist populations (particularly within prisons) has led to a focus on prescriptive programmes where there is a distinct lack of exploration into the drivers for a person to disengage or deradicalise from violence. It has been argued that to tackle a snowballing issue that interventions have moved too quickly for both theory development and methodological structure. This overly quick acceptance of a term that lacks rigorous testing, measuring, and monitoring means that there is distinct lack of evidence base for deradicalisation being a genuine process/phenomenon, leading to academics retrospectively attempting to design frameworks and interventions around a concept that is not truly understood. The UK Home Office has openly acknowledged the lack of empirical data on this subject. This lack of evidence has a direct impact on policy and intervention development. Extremism and deradicalisation are issues that affect public health outcomes on a global scale, to the point that terrorism has now been added to the list of causes of trauma, both in the direct form of being victim of an attack but also the indirect context of witnesses, children and ordinary citizens who live in daily fear. This study critiques current deradicalisation discourses to establish whether public health approaches offer opportunities for development. The research begins by exploring the theoretical constructs of both what deradicalisation, and public health issues are. Questioning: What does deradicalisation involve? Is there an evidential base on which deradicalisation theory has established itself? What theory are public health interventions devised from? What does success look like in both fields? From establishing this base, current deradicalisation practices will then be explored through examples of work already being carried out. Critiques can be broken into discussion points of: Language, the difficulties with conducting empirical studies and the issues around outcome measurements that deradicalisation interventions face. This study argues that a public health approach towards deradicalisation offers the opportunity to attempt to bring clarity to the definitions of radicalisation, identify what could be modified through intervention and offer insights into the evaluation of interventions. As opposed to simply focusing on an element of deradicalisation and analysing that in isolation, a public health approach allows for what the literature has pointed out is missing, a comprehensive analysis of current interventions and information on creating efficacy monitoring systems. Interventions, policies, guidance, and practices in both the UK and Australia will be compared and contrasted, due to the joint nature of this research between Sheffield Hallam University and La Trobe, Melbourne.

Keywords: radicalisation, deradicalisation, violent extremism, public health

Procedia PDF Downloads 48
496 MBES-CARIS Data Validation for the Bathymetric Mapping of Shallow Water in the Kingdom of Bahrain on the Arabian Gulf

Authors: Abderrazak Bannari, Ghadeer Kadhem

Abstract:

The objectives of this paper are the validation and evaluation of the performance of MBES-CARIS BASE surface data for bathymetric mapping of shallow water in the Kingdom of Bahrain. The latter is an archipelago with a total land area of about 765.30 km², approximately 126 km of coastline and 8,000 km² of marine area, located in the Arabian Gulf, east of Saudi Arabia and west of Qatar (26° 00’ N, 50° 33’ E). To achieve our objectives, bathymetric attributed grid files (X, Y, and depth) generated from the coverage of ship-track MBES data with 300 x 300 m cells, processed with CARIS-HIPS, were downloaded from the General Bathymetric Chart of the Oceans (GEBCO). The data were then brought into ArcGIS and converted into a raster format following five steps: exportation of the GEBCO BASE surface data to an ASCII file; conversion of the ASCII file to a point shapefile; extraction of the points covering the water boundary of the Kingdom of Bahrain; multiplication of the depth values by -1 to obtain negative values; and, finally, interpolation with the simple kriging method in the ArcMap environment to generate a new raster bathymetric grid surface of 30×30 m cells, which was the basis of the subsequent analysis. Finally, for validation purposes, 2200 bathymetric points were extracted from a medium-scale nautical map (1:100,000) considering different depths over the Bahrain national water boundary. The nautical map was scanned, georeferenced and overlaid on the raster bathymetric grid surface generated from the MBES-CARIS data (step 5 above), and homologous depth points were then selected. Statistical analysis, expressed as a linear error at the 95% confidence level, showed a strong correlation (R² = 0.96) and a low RMSE (± 0.57 m) between the nautical map and the derived MBES-CARIS depths when considering only the shallow areas with depths of less than 10 m (about 800 validation points). When considering only the deeper areas (> 10 m), the correlation coefficient is 0.73 and the RMSE is ± 2.43 m, while for all 2200 validation points, including all depths, the correlation is still significant (R² = 0.81) with a satisfactory RMSE (± 1.57 m). This variation can be caused by the MBES not completely covering the bottom in several of the deeper pockmarks because of the rapid change in depth. In addition, steep slopes and the rough seafloor probably affect the acquired MBES raw data, and the interpolation of missing values between the MBES acquisition swath lines (ship-tracked sounding data) may not reflect the true depths of these missed areas. Overall, however, the results show that MBES-CARIS data are very appropriate for bathymetric mapping of shallow-water areas.
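The validation described above reduces to pairing digitised chart depths with the kriged MBES depths at homologous points and computing R² and RMSE. A minimal numpy sketch of that comparison is given below; the arrays are placeholders standing in for the 2200 validation points, not the study's data.

```python
import numpy as np

def validate_depths(chart_depth, mbes_depth):
    """Return R^2 and RMSE between nautical-chart and MBES-derived depths."""
    chart_depth = np.asarray(chart_depth, dtype=float)
    mbes_depth = np.asarray(mbes_depth, dtype=float)
    residuals = mbes_depth - chart_depth
    rmse = np.sqrt(np.mean(residuals ** 2))
    r = np.corrcoef(chart_depth, mbes_depth)[0, 1]
    return r ** 2, rmse

# Placeholder check points; the study used ~2200 digitised chart soundings (depths in metres, negative down).
chart = np.array([-2.1, -5.4, -8.7, -12.3, -15.0])
mbes = np.array([-2.4, -5.1, -9.0, -13.5, -16.2])
r2, rmse = validate_depths(chart, mbes)
print(f"R^2 = {r2:.2f}, RMSE = {rmse:.2f} m")

# Depth-stratified validation (e.g., < 10 m vs >= 10 m) simply filters the pairs first:
shallow = np.abs(chart) < 10
r2_shallow, rmse_shallow = validate_depths(chart[shallow], mbes[shallow])
```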

Keywords: bathymetry mapping, multibeam echosounder systems, CARIS-HIPS, shallow water

Procedia PDF Downloads 361
495 Repair of Thermoplastic Composites for Structural Applications

Authors: Philippe Castaing, Thomas Jollivet

Abstract:

As a result of their advantages, i.e. recyclability, weldability and environmental compatibility, long (continuous) fiber thermoplastic composites (LFTPC) are increasingly used in many industrial sectors (mainly automotive and aeronautic) for structural applications. Indeed, in the next ten years, environmental rules will put pressure on the use of new structural materials like composites. In aerospace, more than 50% of the damage is due to stress impact, and 85% of the damage is repaired on the fuselage (fuselage skin panels and around doors). With the arrival of airplanes made mainly of composite materials, replacement of sections or panels seems difficult economically speaking, and repair becomes essential. The objective of the present study is to propose a repair solution that avoids replacing the damaged part in thermoplastic composites, in order to recover the initial mechanical properties. The classification of impact damage is not so easy: talking about low-energy impact (less than 35 J) can be totally wrong when high speeds or small thicknesses, as well as thermoplastic resins, are considered. Crash and perforation at higher energy create substantial damage, and such structures are replaced without repairing, so we consider here only damage due to low-energy impacts, which for laminates consists of transverse cracking, delamination, and fiber rupture. At low energy, the damage is barely visible but can nevertheless significantly reduce the mechanical strength of the part due to resin cracks, while little fiber rupture is observed. The patch repair solution remains the standard one but may lead to the rupture of fibers and consequently create more damage. That is the reason why we investigate the repair of thermoplastic composites impacted at low energy. Indeed, thermoplastic resins are interesting as they absorb impact energy through plastic strain. The methodology is as follows: impact tests at low energy on thermoplastic composites; identification of the damage by micrographic observations; evaluation of the harmfulness of the damage; repair by reconsolidation according to the extent of the damage; and validation of the repair by mechanical characterization (compression). In this study, impact tests are performed at various levels of energy on thermoplastic composites (PA/C, PEEK/C and PPS/C, woven 50/50 and unidirectional) to determine the level of impact energy that creates damage in the resin without fiber rupture. We identify the extent of the damage by ultrasonic inspection and micrographic observations through the thickness of the plane part. The samples were additionally characterized in compression to evaluate the loss of mechanical properties. The repair strategy then consists in reconsolidating the damaged parts by thermoforming, and after reconsolidation the laminates are characterized in compression for validation. To conclude, the study demonstrates the feasibility of the repair for low-energy impact on thermoplastic composites, as the samples recover their properties. At this first step of the study, the “repair” is made by reconsolidation on a thermoforming press, but one could imagine an in situ process to reconsolidate the damaged parts.

Keywords: aerospace, automotive, composites, compression, damages, repair, structural applications, thermoplastic

Procedia PDF Downloads 281
494 Simulating an Interprofessional Hospital Day Shift: A Student Interprofessional (IP) Collaborative Learning Activity

Authors: Fiona Jensen, Barb Goodwin, Nancy Kleiman, Rhonda Usunier

Abstract:

Background: Clinical simulation is now a common component in many health profession curricula in preparation for clinical practice. In the Rady Faculty of Health Sciences (RFHS), college leads in simulation and interprofessional (IP) education planned an eight-hour simulated hospital day shift in which seventy students from six health professions across two campuses learned with each other in a safe, realistic environment. Learning about interprofessional collaboration, an expected competency for many health professions upon graduation, was a primary focus of the simulation event. Method: Faculty representatives from the Colleges of Nursing, Medicine, Pharmacy and Rehabilitation Sciences (Physical Therapy, Occupational Therapy, Respiratory Therapy) worked together to plan the IP event in a simulation facility in the College of Nursing. Each college provided a faculty mentor to guide the students of its own profession. Students were placed in interprofessional teams consisting of a nurse, physician and pharmacist, with respiratory, occupational, and physical therapists shared across the teams depending on the needs of the patients. Eight patient scenarios were role-played by health profession students, who had been provided with their patient’s story shortly before the event. Each team was guided by a facilitator. Results and Outcomes: On the morning of the event, all students gathered in a large group to meet mentors and facilitators and receive a brief overview of the six competencies for effective collaboration and the session objectives. The students, assuming their own profession roles, were provided with their patient’s chart at the beginning of the shift, met with their team, and then completed profession-specific assessments. Shortly into the shift, IP team rounds began, facilitated by the team facilitator. During the shift, each patient role-played a spontaneous health incident, which required collaboration between the IP team members for assessment and management. The afternoon concluded with team rounds, a collaborative management plan, and a facilitated debrief. Conclusions: During the debrief sessions, students responded to set questions related to the session learning objectives and expressed many positive learning moments. We believe that we have a sustainable simulation IP collaborative learning opportunity which can be embedded into curricula and has the capacity to grow to include more health profession faculties and students. Opportunities are being explored in the RFHS at the administrative level to offer this event more frequently in the academic year to reach more students. In addition, a formally structured event evaluation tool would provide important feedback to event organizers and the colleges about the significance of the simulation event to student learning.

Keywords: simulation, collaboration, teams, interprofessional

Procedia PDF Downloads 109
493 Determinants of Quality of Life in Patients with Atypical Parkinsonian Syndromes: 1-Year Follow-Up Study

Authors: Tatjana Pekmezovic, Milica Jecmenica-Lukic, Igor Petrovic, Vladimir Kostic

Abstract:

Background: A group of atypical parkinsonian syndromes (APS) includes a variety of rare neurodegenerative disorders characterized by reduced life expectancy, increasing disability, and considerable impact on health-related quality of life (HRQoL). Aim: In this study we wanted to answer two questions: a) which demographic and clinical factors are main contributors of HRQoL in our cohort of patients with APS, and b) how does quality of life of these patients change over 1-year follow-up period. Patients and Methods: We conducted a prospective cohort study in hospital settings. The initial study comprised all consecutive patients who were referred to the Department of Movement Disorders, Clinic of Neurology, Clinical Centre of Serbia, Faculty of Medicine, University of Belgrade (Serbia), from January 31, 2000 to July 31, 2013, with the initial diagnoses of ‘Parkinson’s disease’, ‘parkinsonism’, ‘atypical parkinsonism’ and ‘parkinsonism plus’ during the first 8 months from the appearance of first symptom(s). The patients were afterwards regularly followed in 4-6 month intervals and eventually the diagnoses were established for 46 patients fulfilling the criteria for clinically probable progressive supranuclear palsy (PSP) and 36 patients for probable multiple system atrophy (MSA). The health-related quality of life was assessed by using the SF-36 questionnaire (Serbian translation). Hierarchical multiple regression analysis was conducted to identify predictors of composite scores of SF-36. The importance of changes in quality of life scores of patients with APS between baseline and follow-up time-point were quantified using Wilcoxon Signed Ranks Test. The magnitude of any differences for the quality of life changes was calculated as an effect size (ES). Results: The final models of hierarchical regression analysis showed that apathy measured by the Apathy evaluation scale (AES) score accounted for 59% of the variance in the Physical Health Composite Score of SF-36 and 14% of the variance in the Mental Health Composite Score of SF-36 (p<0.01). The changes in HRQoL were assessed in 52 patients with APS who completed 1-year follow-up period. The analysis of magnitude for changes in HRQoL during one-year follow-up period have shown sustained medium ES (0.50-0.79) for both Physical and Mental health composite scores, total quality of life as well as for the Physical Health, Vitality, Role Emotional and Social Functioning. Conclusion: This study provides insight into new potential predictors of HRQoL and its changes over time in patients with APS. Additionally, identification of both prognostic markers of a poor HRQoL and magnitude of its changes should be considered when developing comprehensive treatment-related strategies and health care programs aimed at improving HRQoL and well-being in patients with APS.

Keywords: atypical parkinsonian syndromes, follow-up study, quality of life, APS

Procedia PDF Downloads 283
492 Estimating the Efficiency of a Meta-Cognitive Intervention Program to Reduce the Risk Factors of Teenage Drivers with Attention Deficit Hyperactivity Disorder While Driving

Authors: Navah Z. Ratzon, Talia Glick, Iris Manor

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is a chronic disorder that affects the sufferer’s functioning throughout life and in various spheres of activity, including driving. Difficulties in cognitive functioning and executive functions are often part and parcel of the ADHD diagnosis, and thus form a risk factor in driving. Studies examining the effectiveness of intervention programs for improving and rehabilitating driving in typical teenagers have been conducted in relatively small numbers; while studies on similar programs for teenagers with ADHD have been especially scarce. The aim of the present study has been to examine the effectiveness of a metacognitive occupational therapy intervention program for reducing risk factors in driving among teenagers with ADHD. The present study included 37 teenagers aged 17 to 19. They included 23 teenagers with ADHD divided into experimental (11) and control (12) groups; as well as 14 non-ADHD teenagers forming a second control group. All teenagers taking part in the study were examined in the Tel Aviv University driving lab, and underwent cognitive diagnoses and a driving simulator test. Every subject in the intervention group took part in 3 assessment meetings, and two metacognitive treatment meetings. The control groups took part in two assessment meetings with a follow-up meeting 3 months later. In all the study’s groups, the treatment’s effectiveness was tested by comparing monitoring results on the driving simulator at the first and second evaluations. In addition, the driving of 5 subjects from the intervention group was monitored continuously from a month prior to the start of the intervention, a month during the phase of the intervention and another month until the end of the intervention. In the ADHD control group, the driving of 4 subjects was monitored from the end of the first evaluation for a period of 3 months. The study’s findings were affected by the fact that the ADHD control group was different from the two other groups, and exhibited ADHD characteristics manifested by impaired executive functions and lower metacognitive abilities relative to their peers. The study found partial, moderate, non-significant correlations between driving skills and cognitive functions, executive functions, and perceptions and attitudes towards driving. According to the driving simulator test results and the limited sampling results of actual driving, it was found that a metacognitive occupational therapy intervention may be effective in reducing risk factors in driving among teenagers with ADHD relative to their peers with and without ADHD. In summary, the results of the present study indicate a positive direction that speaks to the viability of using a metacognitive occupational therapy intervention program for reducing risk factors in driving. A further study is required that will include a bigger number of subjects, add actual driving monitoring hours, and assign subjects randomly to the various groups.

Keywords: ADHD, driving, driving monitoring, metacognitive intervention, occupational therapy, simulator, teenagers

Procedia PDF Downloads 284
491 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process using various species of worms; it is a mixture of decomposing vegetable or food waste, bedding materials, and vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals onto vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely the batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and the crude oil-contaminated soil was performed before and after the soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD) and atomic absorption spectrometry (AAS). The optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the responses from the laboratory experimental results. This study also investigated the application of machine learning models, namely an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS); the ANN and ANFIS models were evaluated using the coefficient of determination (R²) and the mean square error (MSE). The removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process remediation. Optimization of the experimental factors, carried out using numerical optimization techniques by applying the desirability function method of the RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 grams, an adsorbate concentration of 69.11 g/ml, a contact time of 25.96 min, and a pH value of 7.71. The removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process remediation. The coefficient of determination (R²) for the ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing the agreement between experimental and predicted results. For the batch and column processes, respectively, the coefficient of determination (R²) for the RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS coefficients of determination were 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost. Therefore, it is recommended to use machine learning models to predict the removal of crude oil from contaminated soil using vermicompost.
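As a minimal sketch of how the ANN evaluation step can be reproduced, the snippet below fits a small feed-forward network to washing-parameter inputs (dosage, concentration, contact time, pH) and scores it with R² and MSE using scikit-learn; the network size, the synthetic data and the train/test split are illustrative assumptions, and the ANFIS model would require a separate library.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical design matrix: [dosage (g), concentration (g/ml), contact time (min), pH]
rng = np.random.default_rng(0)
X = rng.uniform([5, 10, 5, 4], [50, 100, 60, 10], size=(60, 4))
y = rng.uniform(29, 99, size=60)   # removal efficiency (%), placeholder values only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(X_train, y_train)

y_pred = ann.predict(X_test)
print("R2  =", r2_score(y_test, y_pred))
print("MSE =", mean_squared_error(y_test, y_pred))
```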

Keywords: ANFIS, ANN, crude-oil, contaminated soil, remediation and vermicompost

Procedia PDF Downloads 83
490 Boussinesq Model for Dam-Break Flow Analysis

Authors: Najibullah M, Soumendra Nath Kuiry

Abstract:

Dams and reservoirs are valued for their considerable contributions to irrigation, water supply, flood control, electricity generation, etc., which advance the prosperity and wealth of society across the world. At the same time, a dam breach can cause a devastating flood that threatens human lives and property. Failures of large dams fortunately remain very rare events. Nevertheless, a number of occurrences have been recorded worldwide, corresponding on average to one to two failures every year, and some of those accidents have had catastrophic consequences. It is therefore decisive to predict the dam-break flow for emergency planning and preparedness, as it poses a high risk to life and property. To mitigate the adverse impact of a dam break, modeling is necessary to gain a good understanding of the temporal and spatial evolution of dam-break floods. This study deals mainly with one-dimensional (1D) dam-break modeling. Less commonly used in the hydraulic research community, another possible option for modeling rapidly varied dam-break flows is the extended Boussinesq equations (BEs), which can describe the dynamics of short waves with reasonable accuracy. Unlike the shallow water equations (SWEs), the BEs take into account wave dispersion and the non-hydrostatic pressure distribution. To capture the dam-break oscillations accurately, a numerical scheme of at least fourth-order accuracy is needed to discretize the third-order dispersion terms present in the extended BEs. The scope of this work is therefore to develop a 1D Boussinesq model, fourth-order accurate in both space and time, for dam-break flow analysis using a finite-volume / finite-difference scheme. The spatial discretization of the flux and dispersion terms is achieved through a combination of finite-volume and finite-difference approximations: the flux term is solved using a finite-volume discretization, whereas the bed source and dispersion terms are discretized using a centered finite-difference scheme. Time integration is achieved in two stages, namely a third-order Adams-Bashforth predictor stage and a fourth-order Adams-Moulton corrector stage. The 1D Boussinesq model is implemented in Python 2.7.5. The performance of the developed model is evaluated by comparison with the volume of fluid (VOF) based commercial model ANSYS-CFX. The developed model is used to analyze the risk of cascading dam failures similar to the Panshet dam failure of 1961 that took place in Pune, India. Moreover, this model can predict wave overtopping more accurately than shallow water models for the design of coastal protection structures.
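The two-stage time integration described above is a third-order Adams-Bashforth predictor followed by a fourth-order Adams-Moulton corrector. A minimal sketch of that predictor-corrector step for a generic semi-discrete system du/dt = f(u) is given below; the right-hand-side function is a stand-in for the discretised Boussinesq flux, source and dispersion terms and is not the authors' code.

```python
import numpy as np

def ab3_am4_step(u, f_hist, f, dt):
    """One predictor-corrector step for du/dt = f(u).

    u      : current solution vector u^n
    f_hist : list [f(u^n), f(u^{n-1}), f(u^{n-2})]
    f      : callable returning the semi-discrete right-hand side
    """
    fn, fnm1, fnm2 = f_hist
    # 3rd-order Adams-Bashforth predictor
    u_pred = u + dt / 12.0 * (23.0 * fn - 16.0 * fnm1 + 5.0 * fnm2)
    # 4th-order Adams-Moulton corrector, evaluated with the predicted state
    u_new = u + dt / 24.0 * (9.0 * f(u_pred) + 19.0 * fn - 5.0 * fnm1 + fnm2)
    return u_new, [f(u_new), fn, fnm1]

# Example with a trivial linear RHS; in practice the first two steps are started with a
# self-starting scheme (e.g. Runge-Kutta) before the multistep history exists.
f = lambda u: -0.5 * u
u = np.array([1.0])
hist = [f(u), f(u), f(u)]
for _ in range(10):
    u, hist = ab3_am4_step(u, hist, f, dt=0.1)
print(u)
```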

Keywords: Boussinesq equation, coastal protection, dam-break flow, one-dimensional model

Procedia PDF Downloads 217
489 Adapting Cyber Physical Production Systems to Small and Mid-Size Manufacturing Companies

Authors: Yohannes Haile, Dipo Onipede, Jr., Omar Ashour

Abstract:

The main thrust of our research is to determine Industry 4.0 readiness of small and mid-size manufacturing companies in our region and assist them to implement Cyber Physical Production System (CPPS) capabilities. Adopting CPPS capabilities will help organizations realize improved quality, order delivery, throughput, new value creation, and reduced idle time of machines and work centers of their manufacturing operations. The key metrics for the assessment include the level of intelligence, internal and external connections, responsiveness to internal and external environmental changes, capabilities for customization of products with reference to cost, level of additive manufacturing, automation, and robotics integration, and capabilities to manufacture hybrid products in the near term, where near term is defined as 0 to 18 months. In our initial evaluation of several manufacturing firms which are profitable and successful in what they do, we found low level of Physical-Digital-Physical (PDP) loop in their manufacturing operations, whereas 100% of the firms included in this research have specialized manufacturing core competencies that have differentiated them from their competitors. The level of automation and robotics integration is low to medium range, where low is defined as less than 30%, and medium is defined as 30 to 70% of manufacturing operation to include automation and robotics. However, there is a significant drive to include these capabilities at the present time. As it pertains to intelligence and connection of manufacturing systems, it is observed to be low with significant variance in tying manufacturing operations management to Enterprise Resource Planning (ERP). Furthermore, it is observed that the integration of additive manufacturing in general, 3D printing, in particular, to be low, but with significant upside of integrating it in their manufacturing operations in the near future. To hasten the readiness of the local and regional manufacturing companies to Industry 4.0 and transitions towards CPPS capabilities, our working group (ADMAR Working Group) in partnership with our university have been engaged with the local and regional manufacturing companies. The goal is to increase awareness, share know-how and capabilities, initiate joint projects, and investigate the possibility of establishing the Center for Cyber Physical Production Systems Innovation (C2P2SI). The center is intended to support the local and regional university-industry research of implementing intelligent factories, enhance new value creation through disruptive innovations, the development of hybrid and data enhanced products, and the creation of digital manufacturing enterprises. All these efforts will enhance local and regional economic development and educate students that have well developed knowledge and applications of cyber physical manufacturing systems and Industry 4.0.

Keywords: automation, cyber-physical production system, digital manufacturing enterprises, disruptive innovation, new value creation, physical-digital-physical loop

Procedia PDF Downloads 116
488 Fabrication of SnO₂ Nanotube Arrays for Enhanced Gas Sensing Properties

Authors: Hsyi-En Cheng, Ying-Yi Liou

Abstract:

Metal-oxide semiconductor (MOS) gas sensors are widely used in the gas-detection market due to their high sensitivity, fast response, and simple device structures. However, the high working temperature of MOS gas sensors makes them difficult to integrate with appliances or consumer goods. One-dimensional (1-D) nanostructures are considered to have the potential to lower the working temperature due to their large surface-to-volume ratio, confined electrical conduction channels, and small feature sizes. Unfortunately, the difficulty of fabricating 1-D nanostructure electrodes has hindered the development of low-temperature MOS gas sensors. In this work, we proposed a method to fabricate nanotube arrays, and SnO₂ nanotube-array sensors with different wall thicknesses were successfully prepared and examined. The fabrication of the SnO₂ nanotube arrays incorporates the techniques of a barrier-free anodic aluminum oxide (AAO) template and atomic layer deposition (ALD) of SnO₂. First, a 1.0 µm Al film was deposited on an ITO glass substrate by electron beam evaporation and then anodically oxidized in a 5 wt% phosphoric acid solution at 5°C under a constant voltage of 100 V to form porous aluminum oxide. Once the Al film was fully oxidized, a 15 min over-anodization and a 30 min post chemical dissolution were used to remove the barrier oxide at the bottom end of the pores to generate a barrier-free AAO template. ALD using reactants of TiCl4 and H₂O then followed to grow a thin layer of SnO₂ on the template to form SnO₂ nanotube arrays. After removing the surface layer of SnO₂ by H₂ plasma and dissolving the template in a 5 wt% phosphoric acid solution at 50°C, upright-standing SnO₂ nanotube arrays on ITO glass were produced. Finally, an Ag top electrode with a line width of 5 μm was printed on the nanotube arrays to form the SnO₂ nanotube-array sensor. Two SnO₂ nanotube arrays with wall thicknesses of 30 and 60 nm were produced in this experiment for the evaluation of gas sensing ability. Flat SnO₂ films with thicknesses of 30 and 60 nm were also examined for comparison. The results show that the properties of the ALD SnO₂ films were related to the deposition temperature. The films grown at 350°C had a low electrical resistivity of 3.6×10⁻³ Ω-cm and were, therefore, used for the nanotube-array sensors. The carrier concentration and mobility of the SnO₂ films, characterized by an Ecopia HMS-3000 Hall-effect measurement system, were 1.1×10²⁰ cm⁻³ and 16 cm²/V·s, respectively. The electrical resistance of the SnO₂ film and nanotube-array sensors in air and in a 5% H₂-95% N₂ mixture gas was monitored by a Picotest M3510A 6½-digit multimeter. It was found that, at 200°C, the 30-nm-wall SnO₂ nanotube-array sensor shows the highest response to 5% H₂, followed by the 30-nm SnO₂ film sensor, the 60-nm SnO₂ film sensor, and the 60-nm-wall SnO₂ nanotube-array sensor. However, at temperatures below 100°C, all the samples were insensitive to the 5% H₂ gas. Further investigation of sensors with thinner SnO₂ is necessary to improve the sensing ability at temperatures below 100°C.
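The responses compared above are derived from the resistance logged in air and in the test gas. A minimal sketch of that calculation for an n-type oxide exposed to a reducing gas such as H₂ is given below; the response definition (R_air/R_gas) and the sample resistances are common conventions assumed for illustration, not values from the paper.

```python
def sensor_response(r_air, r_gas):
    """Response of an n-type MOS sensor to a reducing gas, defined here as R_air / R_gas."""
    return r_air / r_gas

# Illustrative resistances (ohms) logged by the multimeter at the working temperature
r_air, r_h2 = 2.4e6, 3.1e5
print(f"Response S = {sensor_response(r_air, r_h2):.1f}")
```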

Keywords: atomic layer deposition, nanotube arrays, gas sensor, tin dioxide

Procedia PDF Downloads 221
487 Prominent Lipid Parameters Correlated with Trunk-to-Leg and Appendicular Fat Ratios in Severe Pediatric Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The examination of both serum lipid fractions and the body’s lipid composition is quite informative during the evaluation of obesity stages. Within this context, alterations in lipid parameters are commonly observed, and variations in the fat distribution of the body are also noteworthy. Total cholesterol (TC), triglycerides (TRG), low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C) are considered the basic lipid fractions. Fat deposited in the trunk and extremities may give a considerable amount of information and convey different messages during discrete health states. Ratios are also derived from the distinct fat distribution in these areas; the trunk-to-leg fat ratio (TLFR) and trunk-to-appendicular fat ratio (TAFR) are the most recently introduced. In this study, lipid fractions as well as TLFR and TAFR were evaluated, and the distinctions among healthy, obese (OB), and morbid obese (MO) groups were investigated. Three groups [normal body mass index (N-BMI), OB, MO] were constituted from a population aged 6 to 18 years, matched for age and sex. The study protocol was approved by the Non-interventional Ethics Committee of Tekirdag Namik Kemal University, and written informed consent forms were obtained from the parents of the participants. Anthropometric measurements (height, weight, waist circumference, hip circumference, head circumference, neck circumference) were obtained and recorded during the physical examination, and body mass index values were calculated. Total, trunk, leg, and arm fat mass values were obtained by TANITA bioelectrical impedance analysis and were used to calculate TLFR and TAFR. Systolic (SBP) and diastolic blood pressures (DBP) were measured. Routine biochemical tests including TC, TRG, LDL-C, HDL-C, and insulin were performed. Data were evaluated using SPSS software, and a p value smaller than 0.05 was accepted as statistically significant. There was no difference among the age values and gender ratios of the groups. No statistically significant difference was observed in terms of DBP, TLFR or the serum lipid fractions. Higher SBP values were measured in both OB and MO children than in those with N-BMI. TAFR showed a significant difference between the N-BMI and OB groups. Statistically significant increases were detected between the insulin values of the N-BMI group and those of the OB as well as MO groups. There were bivariate correlations between LDL-C and TLFR (r=0.396; p=0.037) as well as TAFR values (r=0.413; p=0.029) in the MO group. When adjusted for SBP and DBP, partial correlations were calculated as (r=0.421; p=0.032) and (r=0.438; p=0.025) for LDL-TLFR and LDL-TAFR, respectively. Much stronger partial correlations were obtained for the same pairs (r=0.475; p=0.019 and r=0.473; p=0.020, respectively) upon controlling for TRG and HDL-C. The much stronger partial correlations observed in MO children emphasize the potential transition from morbid obesity to metabolic syndrome. From these findings, it is concluded that LDL-C may be suggested as a discriminating parameter between OB and MO children.
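The partial correlations quoted above (LDL-C against TLFR or TAFR while controlling for SBP/DBP or for TRG/HDL-C) can be reproduced conceptually by correlating the residuals of each variable after regressing out the covariates. The numpy sketch below takes that residual-based route on illustrative data; it is not the SPSS procedure used in the study.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after linearly removing the covariates."""
    Z = np.column_stack([np.ones(len(x)), covariates])
    # residuals of x and y after ordinary least-squares regression on Z
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Illustrative values only (not study data): LDL-C, trunk-to-leg fat ratio, blood pressures
rng = np.random.default_rng(1)
ldl = rng.normal(100, 20, 30)
tlfr = 0.01 * ldl + rng.normal(1.0, 0.1, 30)
sbp = rng.normal(110, 10, 30)
dbp = rng.normal(70, 8, 30)
print(partial_corr(ldl, tlfr, np.column_stack([sbp, dbp])))
```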

Keywords: children, lipid parameters, obesity, trunk-to-leg fat ratio, trunk-to-appendicular fat ratio

Procedia PDF Downloads 91
486 Saco Sweet Cherry: Phenolic Profile and Biological Activity of Coloured and Non-Coloured Fractions

Authors: Catarina Bento, Ana Carolina Gonçalves, Fábio Jesus, Luís Rodrigues Silva

Abstract:

Increasing evidence suggests that a diet rich in fruits and vegetables plays important roles in the prevention of chronic diseases, such as heart disease, cancer, stroke, diabetes, Alzheimer’s disease, among others. Fruits and vegetables gained prominence due their richness in bioactive compounds, being the focus of many studies due to their biological properties acting as health promoters. Prunus avium Linnaeus (L.), commonly known as sweet cherry has been the centre of attention due to its health benefits, and has been highly studied. In Portugal, most of the cherry production comes from the Fundão region. The Saco is one of the most important cultivar produced in this region, attributed with geographical protection. In this work, we prepared 3 extracts through solid-phase extraction (SPE): a whole extract, fraction I (non-coloured phenolics) and fraction II (coloured phenolics). The three extracts were used to determine the phenolic profile of Saco cultivar by liquid chromatography with diode array detection (LC-DAD) technique. This was followed by the evaluation of their biological potential, testing the extracts’ capacity to scavenge free-radicals (DPPH•, nitric oxide (•NO) and superoxide radical (O2●-)) and to inhibit α-glucosidase enzyme of all extracts. Additionally, we evaluated, for the first time, the protective effects against peroxyl radical (ROO•)-induced hemoglobin oxidation and hemolysis in human erythrocytes. A total of 16 non-coloured phenolics were detected, 3-O-caffeoylquinic and ρ-coumaroylquinic acids were the main ones, and 6 anthocyanins were found, among which cyanidin-3-O-rutinoside represented the majority. In respect to antioxidant activity, Saco showed great antioxidant potential in a concentration-dependent manner, demonstrated through the DPPH•,•NO and O2●-radicals, and greater ability to inhibit the α-glucosidase enzyme in comparison to the regular drug acarbose used to treat diabetes. Additionally, Saco proved to be effective to protect erythrocytes against oxidative damage in a concentration-dependent manner against hemoglobin oxidation and hemolysis. Our work demonstrated that Saco cultivar is an excellent source of phenolic compounds which are natural antioxidants that easily capture reactive species, such as ROO• before they can attack the erythrocytes’ membrane. In a general way, the whole extract showed the best efficiency, most likely due to a synergetic interaction between the different compounds. Finally, comparing the two separate fractions, the coloured fraction showed the most activity in all the assays, proving to be the biggest contributor of Saco cherries’ biological activity.

Keywords: biological potential, coloured phenolics, non-coloured phenolics, sweet cherry

Procedia PDF Downloads 225
485 Assessment of Five Photoplethysmographic Methods for Estimating Heart Rate Variability

Authors: Akshay B. Pawar, Rohit Y. Parasnis

Abstract:

Heart Rate Variability (HRV) is a widely used indicator of the regulation between the autonomic nervous system (ANS) and the cardiovascular system. Besides being non-invasive, it also has the potential to predict mortality in cases involving critical injuries. The gold standard method for determining HRV is based on the analysis of RR interval time series extracted from ECG signals. However, because it is much more convenient to obtain photoplethysmogramic (PPG) signals as compared to ECG signals (which require the attachment of several electrodes to the body), many researchers have used pulse cycle intervals instead of RR intervals to estimate HRV. They have also compared this method with the gold standard technique. Though most of their observations indicate a strong correlation between the two methods, recent studies show that in healthy subjects, except for a few parameters, the pulse-based method cannot be a surrogate for the standard RR interval- based method. Moreover, the former tends to overestimate short-term variability in heart rate. This calls for improvements in or alternatives to the pulse-cycle interval method. In this study, besides the systolic peak-peak interval method (PP method) that has been studied several times, four recent PPG-based techniques, namely the first derivative peak-peak interval method (P1D method), the second derivative peak-peak interval method (P2D method), the valley-valley interval method (VV method) and the tangent-intersection interval method (TI method) were compared with the gold standard technique. ECG and PPG signals were obtained from 10 young and healthy adults (consisting of both males and females) seated in the armchair position. In order to de-noise these signals and eliminate baseline drift, they were passed through certain digital filters. After filtering, the following HRV parameters were computed from PPG using each of the five methods and also from ECG using the gold standard method: time domain parameters (SDNN, pNN50 and RMSSD), frequency domain parameters (Very low-frequency power (VLF), Low-frequency power (LF), High-frequency power (HF) and Total power or “TP”). Besides, Poincaré plots were also plotted and their SD1/SD2 ratios determined. The resulting sets of parameters were compared with those yielded by the standard method using measures of statistical correlation (correlation coefficient) as well as statistical agreement (Bland-Altman plots). From the viewpoint of correlation, our results show that the best PPG-based methods for the determination of most parameters and Poincaré plots are the P2D method (shows more than 93% correlation with the standard method) and the PP method (mean correlation: 88%) whereas the TI, VV and P1D methods perform poorly (<70% correlation in most cases). However, our evaluation of statistical agreement using Bland-Altman plots shows that none of the five techniques agrees satisfactorily well with the gold standard method as far as time-domain parameters are concerned. In conclusion, excellent statistical correlation implies that certain PPG-based methods provide a good amount of information on the pattern of heart rate variation, whereas poor statistical agreement implies that PPG cannot completely replace ECG in the determination of HRV.
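For reference, the time-domain parameters compared above are simple functions of the inter-beat interval series, and the Bland-Altman statistics reduce to the mean and standard deviation of the paired differences. The sketch below computes SDNN, RMSSD, pNN50 and the Bland-Altman bias and limits of agreement with numpy; the interval and per-subject values are placeholders, and peak detection and filtering are outside its scope.

```python
import numpy as np

def time_domain_hrv(ibi_ms):
    """SDNN, RMSSD and pNN50 from an inter-beat interval series in milliseconds."""
    ibi = np.asarray(ibi_ms, dtype=float)
    diffs = np.diff(ibi)
    sdnn = np.std(ibi, ddof=1)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diffs) > 50.0)
    return sdnn, rmssd, pnn50

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Placeholder series: RR intervals from ECG vs systolic peak-peak intervals from PPG (ms)
rr = np.array([812, 798, 805, 830, 790, 815, 802], float)
pp = rr + np.random.default_rng(2).normal(0, 5, rr.size)
print(time_domain_hrv(rr), time_domain_hrv(pp))

# Bland-Altman is applied per parameter across subjects (placeholder per-subject SDNN values)
sdnn_ecg = np.array([42.1, 55.3, 38.0, 61.2, 47.5])
sdnn_ppg = np.array([44.0, 57.9, 36.5, 64.0, 49.1])
print(bland_altman(sdnn_ecg, sdnn_ppg))
```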

Keywords: photoplethysmography, heart rate variability, correlation coefficient, Bland-Altman plot

Procedia PDF Downloads 298
484 Evaluation of the Cytotoxicity and Cellular Uptake of a Cyclodextrin-Based Drug Delivery System for Cancer Therapy

Authors: Caroline Mendes, Mary McNamara, Orla Howe

Abstract:

Drug delivery systems are proposed for use in cancer treatment to specifically target cancer cells and deliver a therapeutic dose without affecting normal cells. For that purpose, the use of folate receptors (FR) can be considered a key strategy, since they are commonly over-expressed in cancer cells. In this study, cyclodextrins (CD) have being used as vehicles to target FR and deliver the chemotherapeutic drug, methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within their cavities. Here, β-CD has been modified using folic acid so as to specifically target the FR. Thus, this drug delivery system consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7, A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs, were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments have also shown that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays. 48 hours treatment with the free drug and the complex resulted in IC50 values of 93.9 µM ± 15.2 and 56.0 µM ± 4.0 for CaCo-2 for free MTX and CDEnFA:MTX respectively, 118.2 µM ± 16.8 and 97.8 µM ± 12.3 for MCF-7, 36.4 µM ± 6.9 and 75.0 µM ± 10.5 for A549 and 132.6 µM ± 16.1 and 288.1 µM ± 26.3 for BEAS-2B. These results demonstrate that free MTX is more toxic towards cell lines expressing low levels of FR, such as the BEAS-2B. More importantly, these results demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high FR expressing CaCo-2 cells, indicating that it has potential to target this receptor, enhancing the specificity and the efficiency of the drug. The use of cell imaging by confocal microscopy has allowed visualisation of FR targeting in cancer cells, as well as the identification of the interlisation pathway of the drug. Hence, the cellular uptake and internalisation process of this drug delivery system is being addressed.
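The IC50 values reported above are typically read off a sigmoidal fit to the MTT viability data. As a minimal, hedged sketch of that step (not necessarily the authors' analysis pipeline), the code below fits a four-parameter logistic dose-response curve with scipy and extracts the IC50; the concentrations and viabilities are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve (viability vs concentration)."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Placeholder MTT data: drug concentration (uM) and % viability after 48 h
conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)
viability = np.array([98, 95, 84, 62, 38, 21], dtype=float)

p0 = [10.0, 100.0, 50.0, 1.0]   # initial guesses: bottom, top, IC50, Hill slope
params, _ = curve_fit(four_pl, conc, viability, p0=p0, maxfev=10000)
print(f"Estimated IC50 = {params[2]:.1f} uM")
```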

Keywords: cancer treatment, cyclodextrins, drug delivery, folate receptors, reduced folate carriers

Procedia PDF Downloads 287
483 Toxicity Evaluation of Reduced Graphene Oxide on First Larval Stages of Artemia sp.

Authors: Roberta Pecoraro

Abstract:

The focus of this work was to investigate the potential toxic effect of titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites on nauplii of microcrustacean Artemia sp. In order to assess the nanocomposite’s toxicity, a short-term test was performed by exposing nauplii to solutions containing TiO₂-rGO. To prepare titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites, a green procedure based on solar photoreduction was proposed; it allows to obtain the photocatalysts by exploiting the photocatalytic properties of titania activated by the solar irradiation in order to avoid the high temperatures and pressures required for the standard hydrothermal synthesis. Powders of TiO₂-rGO supplied by the Department of Chemical Sciences (University of Catania) are indicated as TiO₂-rGO at 1% and TiO₂-rGO at 2%. Starting from a stock solution (1mg rGO-TiO₂/10 ml ASPM water) of each type, we tested four different concentrations (serial dilutions ranging from 10⁻¹ to 10⁻⁴ mg/ml). All the solutions have been sonicated for 12 min prior to use. Artificial seawater (called ASPM water) was prepared to guarantee the hatching of the cysts and to maintain nauplii; the durable cysts used in this study, marketed by JBL (JBL GmbH & Co. KG, Germany), were hydrated with ASPM water to obtain nauplii (instar II-III larvae). The hatching of the cysts was carried out in the laboratory by immersing them in ASPM water inside a 500 ml beaker and keeping them constantly oxygenated thanks to an aerator for the insufflation of microbubble air: after 24-48 hours, the cysts hatched, and the nauplii appeared. The nauplii in the second and third stages of development were collected one-to-one, using stereomicroscopes, and transferred into 96-well microplates where one nauplius per well was added. The wells quickly have been filled with 300 µl of each specific concentration of the solution used, and control samples were incubated only with ASPM water. Replication was performed for each concentration. Finally, the microplates were placed on an orbital shaker, and the tests were read after 24 and 48 hours from inoculating the solutions to assess the endpoint (immobility/death) for the larvae. Nauplii that appeared motionless were counted as dead, and the percentages of mortality were calculated for each treatment. The results showed a low percentage of immobilization both for TiO₂-rGO at 1% and TiO₂-rGO at 2% for all concentrations tested: for TiO₂-rGO at 1% was below 12% after 24h and below 15% after 48h; for TiO₂-rGO at 2% was below 8% after 24h and below 12% after 48h. According to other studies in the literature, the results have not shown mortality nor toxic effects on the development of larvae after exposure to rGO. Finally, it is important to highlight that the TiO₂-rGO catalysts were tested in the solar photodegradation of a toxic herbicide (2,4-Dichlorophenoxyacetic acid, 2,4-D), obtaining a high percentage of degradation; therefore, this alternative approach could be considered a good strategy to obtain performing photocatalysts.

Keywords: nauplii, photocatalytic properties, reduced GO, short-term toxicity test, titanium dioxide

Procedia PDF Downloads 161
482 Reflective Portfolio to Bridge the Gap in Clinical Training

Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab

Abstract:

Background: Due to the busy schedule of the practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. The clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since they are prospective clinical doctors under training, students need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated that it will provide learners with a tool for continuous monitoring and learning gap analysis of their clinical skills. Methodology: A hardcopy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty, and students, and the rationale of reflection was emphasized. Students were given samples of reflective writing. The portfolio was then implemented amongst undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation of the portfolio, a survey questionnaire was introduced to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Out of 52 respondents, 57.7% were in the internal medicine clinical placement rotation, and 42.3% were in the otorhinolaryngology clinical placement rotation. The respondents believe that the reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased the learning value when used as a formative assessment, helped them relate material to different courses, and improved their professional skills. However, respondents did not agree that the portfolio improved their self-esteem or helped develop their critical thinking. They also reported that the portfolio takes time to complete and that supervisors were not always helpful; they had to chase supervisors for feedback. 53.8% of the respondents followed the Gibbs reflective model to write their reflections, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred written feedback, whilst 11.5% preferred oral feedback. Most of them suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, 28.8% thought it was too bulky, and 27.5% expressed a preference for a mobile application. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.

Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education

Procedia PDF Downloads 62
481 Evaluation of Monoterpene Induction in Ugni molinae Ecotypes Subjected to Red Grape Caterpillar (Lepidoptera: Arctiidae) Herbivory

Authors: Manuel Chacon-Fuentes, Leonardo Bardehle, Marcelo Lizama, Claudio Reyes, Andres Quiroz

Abstract:

The insect-plant interaction is a complex process in which the plant is able to release chemical signals that modify the behavior of insects. Insect herbivory can trigger mechanisms that increase the production of secondary metabolites that help the plant cope with herbivores. Monoterpenes are a class of secondary metabolites involved in direct defense, acting as repellents of herbivores, or in indirect defense, acting as attractants for insect predators. In addition, an increase in monoterpene concentration is an effect commonly associated with herbivory; hence, plants subjected to damage by herbivory increase monoterpene production in comparison to undamaged plants. In this framework, co-evolutionary aspects play a fundamental role in the adaptation of herbivores to their host and in the counter-adaptive strategies of the plants to avoid the herbivores. In this context, Ugni molinae 'murtilla' is a native shrub from Chile characterized by its antioxidant activity, mainly related to the phenolic compounds present in its fruits. The larval stage of the red grape caterpillar Chilesia rudis Butler (Lepidoptera: Arctiidae) has been reported as an important defoliator of U. molinae. This insect is native to Chile and has probably been involved in a co-evolutionary process with murtilla. Therefore, we hypothesized that herbivory by the red grape caterpillar increases the emission of monoterpenes in Ugni molinae. Ecotypes 19-1 and 22-1 of murtilla were established and maintained at 25 °C in the Laboratorio de Química Ecológica at Universidad de La Frontera. Red grape caterpillars of approximately 40 mm were collected from grasses near Temuco (Chile) and deprived of food for 24 h before the assays. Ten caterpillars were placed on the foliage of ecotypes 19-1 and 22-1 and allowed to feed for 48 h. After this time, the caterpillars were removed from the ecotypes and the monoterpenes were collected. A glass chamber was used to enclose the ecotypes, and a Porapak-Q column was used to trap the monoterpenes. After 24 h of collection, the columns were desorbed with hexane. The samples were then injected into a gas chromatograph coupled to a mass spectrometer, and the monoterpenes were identified according to the NIST library. All the experiments were performed in triplicate. Results showed that α-pinene, β-phellandrene, limonene, and 1,8-cineole were the main monoterpenes released by the murtilla ecotypes. For ecotype 19-1, the abundance of α-pinene was significantly higher in plants subjected to herbivory (100%) than in control plants (54.58%), while β-phellandrene and 1,8-cineole were observed only in control plants. For ecotype 22-1, there was no significant difference in monoterpene abundance. In conclusion, the results suggest a trade-off involving β-phellandrene and 1,8-cineole in response to herbivory damage by the red grape caterpillar, generating an increase in α-pinene abundance.
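
As a minimal sketch of how relative monoterpene abundances can be derived from GC-MS peak areas for the herbivory and control treatments, the example below normalizes hypothetical peak areas to percentages; the compound areas are placeholders, not the study's chromatographic data.

```python
# Minimal sketch: converting GC-MS peak areas into relative abundances (%)
# per treatment for the monoterpenes reported. Peak areas are illustrative
# placeholders, not the study's chromatographic data.
peak_areas = {
    "control":   {"alpha-pinene": 5.2e6, "beta-phellandrene": 2.1e6,
                  "limonene": 1.5e6, "1,8-cineole": 0.8e6},
    "herbivory": {"alpha-pinene": 9.6e6, "beta-phellandrene": 0.0,
                  "limonene": 0.0, "1,8-cineole": 0.0},
}

for treatment, areas in peak_areas.items():
    total = sum(areas.values())
    print(treatment)
    for compound, area in areas.items():
        rel = 100.0 * area / total if total else 0.0
        print(f"  {compound}: {rel:.1f}%")
```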

Keywords: Chilesia rudis, gas chromatography, monoterpenes, Ugni molinae

Procedia PDF Downloads 131
480 The Impact of Emotional Intelligence on Organizational Performance

Authors: El Ghazi Safae, Cherkaoui Mounia

Abstract:

Within companies, emotions have long been neglected as key elements of successful management systems, seen instead as factors that disturb judgment, lead to reckless acts, or negatively affect decision-making. Management systems were long influenced by the Taylorist image of the worker, which made work regular and plain and treated employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements of high-level management. The work of Elton Mayo and Kurt Lewin revealed the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance, and team commitment. Emotions have become fundamental and indispensable to individual performance and, in turn, to management efficiency. The idea that a person's potential is associated with intellectual intelligence, measured by IQ, as the main factor of social, professional and even sentimental success is the main assumption that needs to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, achieve organizational objectives, positively influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it would be interesting to question the importance of emotional intelligence: does it impact organizational performance, and if so, how? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three prominent models: the mixed model, the ability model, and the trait model. The ability model treats emotional intelligence as a cognitive skill, the mixed model combines emotional skills with personality-related aspects, and the trait model is intertwined with personality traits. However, despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, focusing particularly on studies that investigated the impact of emotional intelligence on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.

Keywords: emotions, performance, intelligence, firms

Procedia PDF Downloads 88
479 Branched Chain Amino Acid Kinesio PVP Gel Tape from Extract of Pea (Pisum sativum L.) Based on Ultrasound-Assisted Extraction Technology

Authors: Doni Dermawan

Abstract:

Modern sports competition, as a consequence of the increasing business and entertainment value of sport, demands that athletes maintain excellent physical endurance. Prolonged and intensive physical exercise may pose a risk of muscle tissue damage, accompanied by an increase in the enzyme creatine kinase. Branched Chain Amino Acids (BCAA) are essential amino acids composed of leucine, isoleucine, and valine, which serve to maintain muscle tissue, support the immune system, and prevent loss of coordination and muscle pain. Pea (Pisum sativum L.) is a leguminous plant rich in BCAA: each gram of pea protein contains 82.7 mg of leucine, 56.3 mg of isoleucine, and 56.0 mg of valine. This research aims to develop a BCAA preparation from pea extract, obtained by ultrasound-assisted extraction, applied in a PVP gel Kinesio tape dosage form. The method used in the writing of this paper is the Cochrane Collaboration Review, which includes literature studies, testing of study quality, characteristics of the data collection, analysis, interpretation of results, and clinical trials, as well as recommendations for further research. Extraction of BCAA from pea is carried out using ultrasound-assisted extraction technology, with optimization variables including the extraction solvent (0.1% NaOH), temperature (20-25 °C), time (15-30 minutes), power (80 W), and ultrasonic frequency (35 kHz). The advantages of this extraction method are a high level of solvent penetration into the cell membrane and improved mass transfer, making the BCAA separation process more efficient. The BCAA extract is then incorporated into a PVP (polyvinylpyrrolidone) gel composed of PVP K30 and HPMC K100 dissolved in 10 mL of water-methanol (1:1 v/v). In the PVP gel Kinesio tape preparation, the BCAA in the gel is absorbed into muscle tissue and joints; the tape's tensile force stimulates muscle circulation with variable pressure, so that biomechanical movement is improved and the muscle damage reflected in elevated creatine kinase is prevented. Analysis and evaluation of the preparation include interaction, thickness, weight uniformity, moisture, water vapor permeability, levels of the active substance, content uniformity, percentage elongation, stability testing, release profile, in vitro permeation, and in vivo skin irritation testing.

Keywords: branched chain amino acid, BCAA, Kinesio tape, pea, PVP gel, ultrasound-assisted extraction

Procedia PDF Downloads 266
478 Investigation of a Technology-Enabled Model of Home Care: The eShift Model of Palliative Care

Authors: L. Donelle, S. Regan, R. Booth, M. Kerr, J. McMurray, D. Fitzsimmons

Abstract:

Palliative home health care provision within the Canadian context is challenged by: (i) a shortage of registered nurses (RN) and RNs with palliative care expertise, (ii) an aging population, (iii) reliance on unpaid family caregivers to sustain home care services with limited support to conduct this ‘care work’, (iv) a model of healthcare that assumes client self-care, and (v) competing economic priorities. In response, an interprofessional team of service provider organizations, a software/technology provider, and health care providers developed and implemented a technology-enabled model of home care, the eShift model of palliative home care (eShift). The eShift model combines communication and documentation technology with non-traditional utilization of health human resources to meet patient needs for palliative care in the home. The purpose of this study was to investigate the structure, processes, and outcomes of the eShift model of care. Methodology: Guided by Donabedian's evaluation framework for health care, this qualitative-descriptive study investigated the structure, processes, and outcomes of care of the eShift model of palliative home care. Interviews and focus groups were conducted with health care providers (n=45), decision-makers (n=13), technology providers (n=3) and family caregivers (n=8). Interviews were recorded and transcribed, and a deductive analysis of transcripts was conducted. Study Findings: (1) Structure: The eShift model consists of a remotely situated RN using technology to direct care provision virtually to patients in their home. The remote RN is connected virtually to a health technician (an unregulated care provider) in the patient's home using real-time communication. The health technician uses a smartphone modified with the eShift application and communicates with the RN, who uses a computer with the eShift application/dashboard. Documentation and communication about patient observations and care activities occur in the eShift portal. The RN is typically accountable for four to six health technicians and patients over an 8-hour shift. The technology provider was identified as an important member of the healthcare team. Other members of the team include family members, care coordinators, nurse practitioners, physicians, and allied health professionals. (2) Processes: Conventionally, patient needs are the focus of care; however, within eShift, the patient and the family caregiver were the focus of care. Enhanced medication administration was seen as one of the most important processes, and family caregivers reported high satisfaction with the care provided. There was perceived enhanced teamwork among health care providers. (3) Outcomes: Patients were able to die at home. The eShift model enabled consistency and continuity of care, and effective management of patient symptoms and caregiver respite. Conclusion: More than a technology solution, the eShift model of care was viewed as transforming home care practice and as an innovative way to resolve the shortage of palliative care nurses within home care.

Keywords: palliative home care, health information technology, patient-centred care, interprofessional health care team

Procedia PDF Downloads 391
477 Baricitinib Lipid-based Nanosystems as a Topical Alternative for Atopic Dermatitis Treatment

Authors: N. Garrós, P. Bustos, N. Beirampour, R. Mohammadi, M. Mallandrich, A.C. Calpena, H. Colom

Abstract:

Atopic dermatitis (AD) is a persistent skin condition characterized by chronic inflammation caused by an autoimmune response. It is a prevalent clinical issue that requires continual treatment to enhance the patient's quality of life. Systemic therapy often involves the use of glucocorticoids or immunosuppressants to manage symptoms. Our objective was to create and assess topical liposomal formulations containing Baricitinib (BNB), a reversible inhibitor of Janus-associated kinase (JAK), which is involved in various immune responses. These formulations were intended to address flare-ups and improve treatment outcomes for AD. We created three distinct liposomal formulations by combining different amounts of 1-palmitoyl-2-oleoyl-glycero-3-phosphocholine (POPC), cholesterol (CHOL), and ceramide (CER): (i) pure POPC, (ii) POPC mixed with CHOL (at a ratio of 8:2, mol/mol), and (iii) POPC mixed with CHOL and CER (at a ratio of 3.6:2.4:4.0 mol/mol/mol). We conducted various tests to determine the formulations' skin tolerance and irritancy capacity, and their ability to cause erythema and edema on altered skin. We also assessed the transepidermal water loss (TEWL) and skin hydration of rabbits to evaluate the tolerability of the formulations. Histological analysis, the HET-CAM test, and the modified Draize test were all used in the evaluation process. The histological analysis revealed that the POPC and POPC:CHOL liposomes did not damage tissue structures. The HET-CAM test showed no irritation effect caused by any of the three liposomes, and the modified Draize test showed a good Draize score for erythema and edema. Liposome POPC effectively counteracted the impact of xylol on the skin, and no erythema or edema was observed during the study. TEWL values were constant for all the liposomes, with values similar to the negative control (within the range of 8-15 g/h·m², a healthy value for rabbits), whereas the positive control showed a significant increase. The skin hydration values were constant and followed the trend of the negative control, while the positive control showed a steady increase during the tolerance study. In conclusion, the developed formulations containing BNB exhibited no harmful or irritating effects, they did not demonstrate any irritant potential in the HET-CAM test, and the POPC and POPC:CHOL liposomes did not cause any structural alteration according to the histological analysis. These positive findings suggest that additional research is necessary to evaluate the efficacy of these liposomal formulations in animal models of the disease, including mutant animals. Furthermore, before proceeding to clinical trials, biochemical investigations should be conducted to better understand the mechanisms of action involved in these formulations.

Keywords: baricitinib, HET-CAM test, histological study, JAK inhibitor, liposomes, modified draize test

Procedia PDF Downloads 69
476 USBware: A Trusted and Multidisciplinary Framework for Enhanced Detection of USB-Based Attacks

Authors: Nir Nissim, Ran Yahalom, Tomer Lancewiki, Yuval Elovici, Boaz Lerner

Abstract:

Background: Attackers increasingly take advantage of innocent users who tend to use USB devices casually, assuming these devices are benign when in fact they may carry embedded malicious behavior or hidden malware. USB devices have many properties and capabilities that have become the subject of malicious operations. Many of the recent attacks targeting individuals, and especially organizations, utilize popular and widely used USB devices, such as mice, keyboards, flash drives, printers, and smartphones. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched via USB devices. Significance: We propose USBWARE, a project that focuses on the vulnerabilities of USB devices and centers on the development of a comprehensive detection framework that relies upon a crucial attack repository. USBWARE will allow researchers and companies to better understand the vulnerabilities and attacks associated with USB devices, as well as providing a comprehensive platform for developing detection solutions. Methodology: The framework of USBWARE is aimed at accurate detection of both known and unknown USB-based attacks by a process that efficiently enhances the framework's detection capabilities over time. The framework will integrate two main security approaches in order to enhance the detection of USB-based attacks associated with a variety of USB devices. The first approach is aimed at the detection of known attacks and their variants, whereas the second approach focuses on the detection of unknown attacks. USBWARE will consist of six independent but complementary detection modules, each detecting attacks based on a different approach or discipline. These modules include novel ideas and algorithms inspired by or already developed within our team's domains of expertise, including cyber security, electrical and signal processing, machine learning, and computational biology. The establishment and maintenance of USBWARE's dynamic and up-to-date attack repository will strengthen the capabilities of the USBWARE detection framework. The attack repository's infrastructure will enable researchers to record, document, create, and simulate existing and new USB-based attacks. This data will be used to maintain the detection framework's updatability by incorporating knowledge regarding new attacks. Based on our experience in the cyber security domain, we aim to design the USBWARE framework so that it will have several characteristics that are crucial for this type of cyber-security detection solution. Specifically, the USBWARE framework should be novel, multidisciplinary, trusted, lightweight, extendable, modular, updatable, and adaptable. Major Findings: Based on our initial survey, we have already found more than 23 types of USB-based attacks, divided into six major categories. Our preliminary evaluation and proofs of concept showed that our detection modules can be used for efficient detection of several basic known USB attacks. Further research, development, and enhancement are required so that USBWARE will be capable of covering all of the major known USB attacks and of detecting unknown attacks. Conclusion: USBWARE is a crucial detection framework that must be further enhanced and developed.
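
As a rough illustration of how verdicts from several independent detection modules might be combined into a single decision, the sketch below uses a simple weighted vote; the module names, scores, weights, and threshold are assumptions for illustration and do not represent USBWARE's actual design.

```python
# Minimal sketch of combining verdicts from independent detection modules.
# Module names, scores, weights, and the weighted-voting rule are
# illustrative assumptions, not the framework's actual design.
from dataclasses import dataclass

@dataclass
class ModuleVerdict:
    module: str
    score: float   # 0.0 = benign, 1.0 = malicious
    weight: float  # trust assigned to the module

def combined_risk(verdicts, threshold=0.5):
    """Weighted average of module scores; flag the device if above threshold."""
    total_weight = sum(v.weight for v in verdicts)
    risk = sum(v.score * v.weight for v in verdicts) / total_weight
    return risk, risk >= threshold

verdicts = [
    ModuleVerdict("signature_matcher", 0.2, 1.0),  # known-attack module
    ModuleVerdict("traffic_anomaly",   0.9, 1.5),  # unknown-attack module
    ModuleVerdict("signal_profile",    0.7, 1.0),
]
risk, flagged = combined_risk(verdicts)
print(f"risk={risk:.2f}, flagged={flagged}")
```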

Keywords: USB, device, cyber security, attack, detection

Procedia PDF Downloads 372
475 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms

Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier

Abstract:

Graphical-based passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute force attacks. Graphical passwords are also highly susceptible to shoulder-surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by another user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic a shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. A total of 74, 57, 50, and 44 users participated in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether the person entering a password is a genuine user or an imposter. Five different machine learning algorithms were deployed to compare performance in user authentication: Decision Trees, Linear Discriminant Analysis, Naive Bayes, Support Vector Machines (SVMs) with a Gaussian radial basis kernel function, and K-Nearest Neighbors. Gesture-based password features vary from one entry to the next, which makes it difficult to distinguish between a creator and an intruder for authentication. For each password entered by a user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5 seconds, 10 seconds, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
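
As a minimal sketch of the reported pipeline, the example below normalizes the four extracted features and trains an SVM with a Gaussian RBF kernel to separate genuine users from imposters; the feature matrix and labels are randomly generated placeholders, not the collected study data.

```python
# Minimal sketch: normalize the four gesture features (score, length,
# speed, size) and train an RBF-kernel SVM for genuine-user vs. imposter
# classification. The data are random placeholders, not the study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # columns: score, length, speed, size
y = rng.integers(0, 2, size=200)     # 1 = genuine user, 0 = imposter

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```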

Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability

Procedia PDF Downloads 82
474 Educational Institutional Approach for Livelihood Improvement and Sustainable Development

Authors: William Kerua

Abstract:

The PNG University of Technology (Unitech) has a mandate for teaching, research, and extension education. Given this function, the Agriculture Department established the 'South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD)' in 2004. SPISARD was established as a vehicle to improve the farming systems practiced in selected villages by undertaking a pluralistic extension method through an 'educational institutional approach'. Unlike other models, SPISARD's educational institutional approach stresses improving whole farming systems in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of the farming systems in a sustainable way so as to increase income, improve nutrition and food security, and provide livelihood enhancement trainings. The second is to enrich the Department's curriculum through teaching, research, and extension, and by getting inputs from the farming community. SPISARD has established a number of model villages in various provinces of Papua New Guinea (PNG), with many positive outcomes and success stories. Adoption of the educational institutional approach thus binds research, extension, and training into one package, using students and academic staff and the establishment of model villages to deliver development and extension to communities. This centre (SPISARD) coordinates the activities of the model village programs and their linkages. The key to the development of the farming systems is establishing and coordinating linkages and collaboration, and developing partnerships both within the institution and with external institutions, organizations, and agencies. SPISARD has a six-point strategy for the development of sustainable agriculture and rural development. These steps are (i) establish contact and identify model villages, (ii) develop model village resource centres for research and training, (iii) conduct baseline surveys to identify the problems and needs of the model villages, (iv) develop solution strategies, (v) implement them, and (vi) evaluate the impact of the solution programs. SPISARD envisages that the farming systems practiced will be improved if the villages are made the centre of SPISARD activities. Therefore, SPISARD has developed a model village approach to channel rural development. The model villages, once established, become the conduit points where teaching, training, research, and technology transfer take place. This approach is again different from existing ones in that the development process takes place in the farmers' environment, with immediate 'real-time' feedback mechanisms based on the farmers' perspective and satisfaction. So far, we have developed 14 model villages and have conducted 75 trainings on 21 different topics in 8 provinces, reaching a total of 2,832 participants of both sexes. The aim of these trainings is to participate directly with farmers in improving their farming systems to increase productivity and income, secure food security and nutrition, and thus improve their livelihoods.

Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture

Procedia PDF Downloads 137
473 The Impact of Team Heterogeneity and Team Reflexivity on Entrepreneurial Decision-Making: An Empirical Study in China

Authors: Chang Liu, Rui Xing, Liyan Tang, Guohong Wang

Abstract:

Entrepreneurial actions are based on entrepreneurial decisions, and the quality of those decisions influences entrepreneurial activities and subsequent new venture performance. Uncertainty in the environment puts heightened demands on the team as a whole and on each team member. A diverse team composition provides rich information on which a team can draw when making complex decisions. However, team heterogeneity may cause emotional conflicts, which are adverse to team outcomes; thus, the effects of team heterogeneity on team outcomes are complex. Although team heterogeneity is an essential factor influencing entrepreneurial decision-making, there is a lack of empirical analysis of the conditions under which team heterogeneity plays a positive role in promoting decision-making quality. Entrepreneurial teams constantly struggle with complex tasks, and how a team shapes its teamwork is key to resolving recurrent issues. As a collective regulatory process, team reflexivity is characterized by continuous joint evaluation and discussion of team goals, strategies, and processes, and their adaptation to current or anticipated circumstances. It enables diversified information to be shared and overtly discussed. Instead of interpreting opposing opinions as hostile, team members take them as useful insights from different perspectives. Team reflexivity leads to better integration of expertise and avoids the interference of negative emotions and conflict. Therefore, we propose that team reflexivity is a conditional factor that influences the impact of team heterogeneity on high-quality entrepreneurial decisions. In this study, we identify team heterogeneity as a crucial determinant of entrepreneurial decision quality. Integrating the literature on decision-making and team heterogeneity, we investigate the relationship between team heterogeneity and entrepreneurial decision-making quality, treating team reflexivity as a moderator. We tested our hypotheses using hierarchical regression and data gathered from 63 teams and 205 individual members from 45 new firms in China's first-tier cities, such as Beijing, Shanghai, and Shenzhen. This research found that both teams' education heterogeneity and teams' functional background heterogeneity were significantly positively related to entrepreneurial decision-making quality, and the positive relationship was stronger in teams with a high level of team reflexivity. In contrast, teams' specialization of education heterogeneity was negatively related to decision-making quality, and the negative relationship was weaker in teams with a high level of team reflexivity. We offer two contributions to the decision-making and entrepreneurial team literatures. First, our study enriches the understanding of the role of entrepreneurial team heterogeneity in entrepreneurial decision-making quality. Unlike previous entrepreneurial decision-making literature, which focuses more on the decision-making modes of entrepreneurs and the top management team, this study highlights that entrepreneurial team heterogeneity makes a unique contribution to generating high-quality entrepreneurial decisions. Second, this study introduced team reflexivity as a moderating variable to explore the boundary conditions under which entrepreneurial team heterogeneity plays its role.
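
As a minimal sketch of the moderation analysis described above, the example below runs a two-step hierarchical regression in which decision quality is regressed on heterogeneity, reflexivity, and their interaction; the variable names and simulated data are placeholders, not the study's team-level measures.

```python
# Minimal sketch of a moderated (hierarchical) regression: step 1 enters
# the main effects, step 2 adds the heterogeneity x reflexivity interaction.
# Variable names and simulated data are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 63  # number of teams
df = pd.DataFrame({
    "heterogeneity": rng.normal(size=n),
    "reflexivity": rng.normal(size=n),
})
df["decision_quality"] = (0.4 * df["heterogeneity"]
                          + 0.3 * df["reflexivity"]
                          + 0.2 * df["heterogeneity"] * df["reflexivity"]
                          + rng.normal(scale=0.5, size=n))

step1 = smf.ols("decision_quality ~ heterogeneity + reflexivity", df).fit()
step2 = smf.ols("decision_quality ~ heterogeneity * reflexivity", df).fit()
print(step2.summary().tables[1])
print(f"R2 change from step 1 to step 2: {step2.rsquared - step1.rsquared:.3f}")
```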

Keywords: decision-making quality, entrepreneurial teams, education heterogeneity, functional background heterogeneity, specialization of education heterogeneity

Procedia PDF Downloads 102
472 Spatial Setting in Translation: A Comparative Evaluation of Translations from Pre-Islamic Poetry

Authors: Raja Lahiani

Abstract:

This study is concerned with scrutinising translations into English and French of references to locations in the desert of pre-Islamic Arabia. These references are used in the Source Text (ST) within a poetic image. Reference is made to the names of three different mountains in Arabia, namely Qatan, Sitar, and Yadhbul. As these mountains are referred to in the context of the poet's description of the density and expansion of the clouds, it is crucial to know that while Sitar and Yadhbul are close to each other, Qatan is far away from them. This distance served the poet in describing the expansion of the clouds. It reflects the spaciousness of the setting (the desert) he handled, and the fact that it was possible for him to physically see what he described. The purpose of this image is for the poet to communicate the vastness of the space he managed to see in a moment of contemplation. Thus, knowledge of this characteristic of the setting is crucial for the receiver to understand the communicative function of the verse. A corpus of eighteen translations is gathered; these vary between verse and prose renderings. The methodology adopted in this research is comparative. Comparison is conducted at both the synchronic and diachronic levels: every translation is compared to the ST and then to previous translations. The comparative work ultimately shows that translators who target historical facts do not necessarily succeed in preserving the image of the ST. It also shows that the more recent the translation is, the deeper the translator's awareness of the link between imagery, setting, and point of view. Since the late eighteenth century and until today, pre-Islamic poetry has been translated into Western languages. Translators differ as to motives, sources, priorities, and intellectual backgrounds. A translator's skopoi undoubtedly affect the way s/he handles aspects of the ST. When it comes to culture-specific aspects and details related to setting, the problem is even more complex. Setting is a very important factor that reveals a great deal about the culture of pre-Islamic Arabia, which is remote from its translators in place, historical framework, and literary tradition. History is present in pre-Islamic poetry, which justifies the substantial literature that has been written to extract information and data from it. These are embedded not only by signalling given facts, events, and meditations, but also by means of references to specific locations and landmarks that existed at the time. Spatial setting is an integral part of a literary text, as it places it within its historical context. The importance of the translator's awareness of spatial anthropological data before engaging in the process of translation is tested. This is also crucial in measuring the effect of setting loss and setting gain in translation. The findings of this research ultimately evaluate the extent to which a comparative methodology is reliable in investigating the role of spatial setting awareness in translation.

Keywords: historical context, translation, comparative literature, spatial setting

Procedia PDF Downloads 231
471 CybeRisk Management in Banks: An Italian Case Study

Authors: E. Cenderelli, E. Bruno, G. Iacoviello, A. Lazzini

Abstract:

The financial sector is exposed to the risk of cyber-attacks like any other industrial sector. Furthermore, the topic of CybeRisk (cyber risk) has become particularly relevant given that Information Technology (IT) attacks have increased drastically in recent years and cannot be stopped by single organizations, requiring a response at the international and national levels. IT risk is never a matter purely for the IT manager, although he clearly plays a key role. A bank's risk management function requires a thorough understanding of the evolving risks as well as the tools and practical techniques available to address them. In response to European and national legislation regarding CybeRisk in the financial system, banks are therefore called upon to strengthen their operational model for CybeRisk management. This will require an important change, with more intense collaboration with the structures that deal with information security, in order to develop an ad hoc system for the evaluation and control of this type of risk. The aim of the work is to propose a framework for the management and control of CybeRisk that will bridge the gap in the literature regarding the understanding and consideration of CybeRisk as an integral part of business management. The IT function has a strong relevance in the management of CybeRisk, which is perceived mainly as operational risk, but there is a positive tendency on the part of risk management toward the identification of CybeRisk assessment methods that are increasingly complete, quantitative, and better able to describe the possible impacts on the business. The paper provides answers to the research questions: Is it possible to define a CybeRisk governance structure able to support the comparison between risk and security? How can the relationships between IT assets be integrated into a CybeRisk assessment framework to guarantee a system of protection and risk control? From a methodological point of view, this research uses a case study approach. The choice of “Monte dei Paschi di Siena” was determined by the specific features of one of Italy's biggest lenders. An intensive research strategy was chosen: an in-depth study of reality. The case study methodology is an empirical approach to exploring a complex and current phenomenon that develops over time. The use of cases also has the advantage of allowing a deepening of aspects concerning the "how" and "why" of contemporary events, over which the scholar has little control. The research is based on quantitative data and qualitative information obtained through semi-structured, open-ended interviews and questionnaires addressed to directors, members of the audit committee, risk, IT, and compliance managers, and those responsible for the internal audit function and anti-money laundering. The added value of the paper can be seen in the development of a framework based on a mapping of IT assets from which their relationships can be identified for the purposes of more effective management and control of cyber risk.
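
As a rough sketch of the kind of IT asset mapping the proposed framework relies on, the example below represents asset dependencies as a small graph and propagates a simple exposure score along them; the asset names, base risks, and propagation rule are assumptions for illustration only and do not reflect the bank's actual assets or the paper's framework.

```python
# Minimal sketch: map IT assets and their dependencies as a graph and
# propagate a simple exposure score. Asset names, base risks, and the
# propagation rule are illustrative assumptions only.
from functools import lru_cache

# asset -> assets it depends on (upstream)
depends_on = {
    "payment_service": ["core_banking_app"],
    "core_banking_app": ["internet_gateway"],
    "email_server": ["internet_gateway"],
    "internet_gateway": [],
}
base_risk = {"internet_gateway": 0.6, "core_banking_app": 0.3,
             "email_server": 0.4, "payment_service": 0.2}

@lru_cache(maxsize=None)
def exposure(asset):
    """Exposure = own base risk plus half the highest upstream exposure, capped at 1."""
    upstream = [exposure(a) for a in depends_on[asset]]
    return min(1.0, base_risk[asset] + 0.5 * max(upstream, default=0.0))

for asset in depends_on:
    print(f"{asset}: exposure {exposure(asset):.2f}")
```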

Keywords: bank, CybeRisk, information technology, risk management

Procedia PDF Downloads 221