Search results for: conflicting edges
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 513

63 The Misuse of Free Cash and Earnings Management: An Analysis of the Extent to Which Board Tenure Mitigates Earnings Management

Authors: Michael McCann

Abstract:

Managerial theories propose that, in joint stock companies, executives may be tempted to waste excess free cash on unprofitable projects to keep control of resources. In order to conceal their projects' poor performance, they may seek to engage in earnings management. On the one hand, managers may manipulate earnings upwards in order to post ‘good’ performances and safeguard their position. On the other, since managers' pursuit of unrewarding investments is likely to lead to low long-term profitability, managers will use negative accruals to reduce the current year's earnings, smoothing earnings over time in order to conceal the negative effects. Agency models argue that boards of directors are delegated by shareholders to ensure that companies are governed properly. Part of that responsibility is ensuring the reliability of financial information. Analyses of the impact of board characteristics, particularly board independence, on the misuse of free cash flow and earnings management find conflicting evidence. However, existing characterizations of board independence do not account for such directors gaining firm-specific knowledge over time, which influences their monitoring ability. Further, there is little analysis of the influence of the relative experience of independent directors and executives on decisions surrounding the use of free cash. This paper contributes to this literature on the heterogeneous characteristics of boards by investigating the influence on earnings management of independent director tenure and of the relative tenures of independent directors and Chief Executives. A balanced panel dataset comprising 51 companies across 11 annual periods from 2005 to 2015 is used for the analysis. In each annual period, firms were classified as conducting earnings management if they had discretionary accruals in the bottom quartile (downwards) or top quartile (upwards) of the distribution of values for the sample. Logistic regressions were conducted to determine the marginal impact of independent board tenure and a number of control variables on the probability of conducting earnings management. The findings indicate that both absolute and relative measures of board independence and experience do not have a significant impact on the likelihood of earnings management. It is the level of free cash flow which is the major influence on the probability of earnings management. Higher free cash flow increases the probability of earnings management significantly. The research also investigates whether board monitoring of earnings management is contingent on the level of free cash flow. However, the results suggest that board monitoring is not amplified when free cash flow is higher. This suggests that the extent of earnings management in companies is determined by a range of company, industry and situation-specific factors.
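
As an illustration of the classification and modelling steps described above, the sketch below flags firm-years whose discretionary accruals fall in the bottom or top quartile as earnings managers and fits a logistic regression of that flag on tenure measures and controls. The synthetic data and all variable names (indep_tenure, rel_tenure, free_cash_flow, firm_size) are assumptions for illustration, not the study's dataset or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic firm-year panel standing in for the 51-firm, 11-year sample.
n = 51 * 11
df = pd.DataFrame({
    "disc_accruals": rng.normal(0.0, 1.0, n),     # discretionary accruals (hypothetical)
    "indep_tenure": rng.uniform(1, 12, n),        # average tenure of independent directors
    "rel_tenure": rng.uniform(0.2, 3.0, n),       # independent-director tenure relative to the CEO's
    "free_cash_flow": rng.uniform(0.0, 0.3, n),   # free cash flow, scaled (hypothetical)
    "firm_size": rng.normal(6.0, 1.0, n),         # an example control variable
})

# Firm-years in the bottom (downward) or top (upward) accrual quartile are
# flagged as conducting earnings management, mirroring the classification above.
lo, hi = df["disc_accruals"].quantile([0.25, 0.75])
df["em_flag"] = ((df["disc_accruals"] <= lo) | (df["disc_accruals"] >= hi)).astype(int)

X = sm.add_constant(df[["indep_tenure", "rel_tenure", "free_cash_flow", "firm_size"]])
fit = sm.Logit(df["em_flag"], X).fit(disp=False)
print(fit.get_margeff().summary())   # marginal effects on Pr(earnings management)
```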

Keywords: corporate governance, boards of directors, agency theory, earnings management

Procedia PDF Downloads 236
62 Midwives’ Perceptions and Experiences of Recommending and Delivering Vaccines to Pregnant Women Following the COVID-19 Pandemic

Authors: Cath Grimley, Debra Bick, Sarah Hillman, Louise Clarke, Helen Atherton, Jo Parsons

Abstract:

The problem: Women in the UK are offered influenza (flu), pertussis (whooping cough) and COVID-19 vaccinations during their pregnancy but uptake of all three vaccines is below the desired rate. These vaccines are offered during pregnancy as pregnant women are at an increased risk of hospitalisation, morbidity, and mortality from these illnesses. Exposure to these diseases during pregnancy can also have a negative impact on the unborn baby with an increased risk of serious complications both while in utero and following birth. The research aims to explore perceptions about the vaccinations offered in pregnancy both from the perspectives of pregnant women and midwives. To determine factors that influence pregnant women’s decisions about whether or not to accept the vaccines following the Covid-19 pandemic and to explore midwives’ experiences of recommending and delivering vaccines. The approach: This research follows a qualitative design involving semi-structured interviews with pregnant women and midwives in the UK. Interviews with midwives explored vaccination discussions they routinely have with pregnant women and identified some of the barriers to vaccination that pregnant women discuss with them. Interviews with pregnant women explored their views since the COVID-19 pandemic about vaccinations offered during pregnancy, and whether the pandemic has influenced perceptions of vulnerability to illness in pregnant women. Midwives were recruited via participating hospitals and midwife specific social media groups. Pregnant women were recruited via participating hospitals and community groups. All interviews were conducted remotely (using telephone or Microsoft Teams) and analysed using thematic analysis. Findings: 43 pregnant women and 16 midwives were recruited and interviewed. The findings presented here will focus on data from midwives. Topics identified included three key themes for midwives. These were 1) Delivery of vaccinations which includes the convenience of offering vaccinations while attending standard antenatal appointments and practical barriers faced in delivering these vaccinations at hospital. 2) Messages and guidance included the importance of up-to-date informational needs for midwives to deliver vaccines and that uncertainty and conflicting messages about the COVID-19 vaccine during pregnancy were a barrier to delivery. 3) Recommendations to have vaccines look at all aspects of recommendations such as how recommendations are communicated, the contents of the recommendation, the importance of the vaccine and the impact of those recommendations on whether women accept the vaccine. Implications: Findings highlight the importance for midwives to receive clear and consistent information so they can feel confident in relaying this information while recommending and delivering vaccines to pregnant women. Emphasising why vaccines are important when recommending vaccinations to pregnant women in addition to standard information on the availability and timing will add to the strength and impact of that recommendation in helping women to make informed decisions about accepting vaccines. The findings of this study will inform the development of an intervention to increase vaccination uptake amongst pregnant women.

Keywords: vaccination, pregnancy, qualitative, interviews, Covid-19, midwives

Procedia PDF Downloads 99
61 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries

Authors: Eva Masson, Andrea Kübler

Abstract:

Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common approach to simulating such disorders in a lab setting, with most of the paradigms focusing on the relationships between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of those tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict. The behavioral patterns were, however, similar enough to allow comparison of the number of trials categorized as ‘overt conflict’ between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but always, on average, for under 10% of the trials in each block. However, changing the order of the paradigms successfully introduced a ‘reset’ of the conflict process, therefore providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict, compared to the others. More specifically, we expect elevated alpha-band power in the left frontal electrodes at around 200 ms post-cueing, compared to the right ones (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it would open the door to further variations of the paradigm to introduce different kinds of conflicts involved in AD. Even though its application as a potential biomarker sounds difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion of task robustness for future neuromodulation and neurofeedback applications.
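
As a pointer for readers unfamiliar with the asymmetry measure discussed above, the sketch below computes a conventional frontal alpha asymmetry index from one left and one right frontal channel. The sampling rate, the F3/F4 channel pairing and the random data are assumptions for illustration, not the study's recording setup or analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

FS = 250.0  # sampling rate in Hz (assumed)

def alpha_power(signal, fs=FS, band=(8.0, 13.0)):
    """Alpha-band power of a single channel, estimated with Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])   # integrate the PSD over the band

# Hypothetical 1 s post-cue segment for a left (e.g. F3) and a right (e.g. F4) electrode.
rng = np.random.default_rng(2)
f3, f4 = rng.normal(size=int(FS)), rng.normal(size=int(FS))

# Classical index: ln(right alpha) - ln(left alpha). Because alpha power is
# inversely related to cortical activation, negative values (less alpha on the
# right than on the left) are usually read as relatively greater right-frontal
# activity, the pattern hypothesized in the abstract.
fai = np.log(alpha_power(f4)) - np.log(alpha_power(f3))
print(f"frontal alpha asymmetry index: {fai:.3f}")
```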

Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG

Procedia PDF Downloads 41
60 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for the 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD reactor. Defect-selected-etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast on the edges of the IGSF. CL and LTPL spectra are acquired to verify the IGSF's type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL and a slight protrusion at the same wavelength in LTPL are identified as the so-called Egx- lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced on the surface by the off-cut and the TSD are both suggested to be two C-Si bilayers in height. The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. Firstly, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, and an IGSF is generated. Secondly, the step-flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thereby formed and extends by the step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV, and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for the 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
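
A back-of-envelope check of the exciton gap quoted above: adding a phonon energy to each replica position should return roughly the same Egx. The phonon energies below are nominal values assumed for illustration; they are not values reported by the authors.

```python
# Phonon-replica positions quoted in the abstract (eV) and assumed nominal
# 4H-SiC phonon energies (meV) for the TA, LA, TO and LO branches.
replicas_eV = [2.468, 2.438, 2.420, 2.410]
phonons_meV = [46, 77, 95, 104]   # assumed, illustrative values

estimates = [e + p / 1000.0 for e, p in zip(replicas_eV, phonons_meV)]
egx = sum(estimates) / len(estimates)
print(f"Egx estimate: {egx:.3f} eV")   # ~2.51 eV, consistent with the quoted 2.512 eV
```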

Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 317
59 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk for multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects that are assigned a conditional autoregressive (CAR) prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating a neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as its edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We recognized that precipitation, temperature, and humidity are positively associated with the malaria threat in the area. On the other hand, enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison with the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of both malaria types were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
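
The graph view described above can be sketched as follows: areal units become vertices, border-sharing relations become edges, and the resulting binary neighbourhood matrix enters a CAR-type precision matrix. The district names, edges and rho value are invented for illustration, and the graph-based optimisation of the edge set is only indicated in a comment rather than implemented.

```python
import numpy as np
import networkx as nx

# Illustrative districts (vertices) and border-sharing pairs (edges).
districts = ["A", "B", "C", "D", "E"]
borders = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "D")]

G = nx.Graph()
G.add_nodes_from(districts)
G.add_edges_from(borders)

W = nx.to_numpy_array(G, nodelist=districts)   # binary neighbourhood matrix
D = np.diag(W.sum(axis=1))                     # number of neighbours per district

# In a CAR prior, the precision of the spatial random effects is proportional
# to D - rho * W. A graph-based optimisation would search over edge sets
# (adding or removing edges) to find a W that fits the observed spatial
# correlation better than the fixed border-sharing graph used here.
rho = 0.9
precision = D - rho * W
print(precision)
```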

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weight matrix

Procedia PDF Downloads 82
58 Anti-Graft Instruments and Their Role in Curbing Corruption: Integrity Pact and Its Impact on Indian Procurement

Authors: Jot Prakash Kaur

Abstract:

The paper aims to show that, with the introduction of anti-graft instruments and the willingness of governments to implement them, a significant change can be witnessed in the anti-corruption landscape of any country. Over the past decade, anti-graft instruments have been introduced by several international non-governmental organizations with the vision of curbing corruption. Transparency International's 'Integrity Pact' has been one such initiative. The Integrity Pact has been described as a tool for preventing corruption in public contracting. It has found particular relevance in a developing country like India, where public procurement constitutes 25-30 percent of Gross Domestic Product. Corruption in public procurement has been a cause of concern even though India has in place a whole architecture of rules and regulations governing public procurement. The Integrity Pact was first adopted by a leading government-owned oil and gas company in 2006. By May 2015, over ninety organizations had adopted the Integrity Pact, the majority of them central government units. The methodology used to understand the impact of the Integrity Pact on public procurement is the analysis of information received from the instrument's key stakeholders. From the government, information was sought through the Right to Information Act 2005 about the details of the adoption of this instrument by various government organizations and departments. For contractors, company websites and annual reports were used to find out the steps taken towards implementation of the Integrity Pact. For civil society, Transparency International India's resource materials, which include publications and reports on the Integrity Pact, were used to understand its impact. Some of the findings of the study include organizations adopting Integrity Pacts in all kinds of contracts, such that 90% of their procurements fall under the Integrity Pact. Indian state governments have found merit in the Integrity Pact and have adopted it in their procurement contracts. The Integrity Pact has been instrumental in creating a brand image for companies. External Monitors, an essential feature of the Integrity Pact, have emerged as arbitrators for the bidders and as the first line of procurement auditors for the organizations. India has cancelled two defense contracts after finding them to be in conflict with the provisions of the Integrity Pact. Some of the clauses of the Integrity Pact have been included in the proposed public procurement legislation. The Integrity Pact has slowly but steadily grown to become an integral part of big-ticket procurement in India. The government's commitment to implementing the Integrity Pact has changed the way in which public procurement is conducted in India. Public procurement was a segment infested with corruption, but with the adoption of the Integrity Pact, a number of clean-up measures have been undertaken to make procurement transparent. The paper is divided into five sections. The first section elaborates on the Integrity Pact. The second section discusses the stakeholders of the instrument and the roles they play in its implementation. The third section discusses the efforts taken by the government to implement the Integrity Pact in India. The fourth section discusses the role of the External Monitor as arbitrator. The final section puts forth suggestions to strengthen the existing form of the Integrity Pact and increase its reach.

Keywords: corruption, integrity pact, procurement, vigilance

Procedia PDF Downloads 342
57 Positron Emission Tomography Parameters as Predictors of Pathologic Response and Nodal Clearance in Patients with Stage IIIA NSCLC Receiving Trimodality Therapy

Authors: Andrea L. Arnett, Ann T. Packard, Yolanda I. Garces, Kenneth W. Merrell

Abstract:

Objective: Pathologic response following neoadjuvant chemoradiation (CRT) has been associated with improved overall survival (OS). Conflicting results have been reported regarding the pathologic predictive value of positron emission tomography (PET) response in patients with stage III lung cancer. The aim of this study was to evaluate the correlation between post-treatment PET response and pathologic response utilizing novel FDG-PET parameters. Methods: This retrospective study included patients with non-metastatic, stage IIIA (N2) NSCLC treated with CRT followed by resection. All patients underwent PET prior to and after neoadjuvant CRT. Univariate analysis was utilized to assess correlations between PET response, nodal clearance, pCR, and near-complete pathologic response (defined as microscopic residual disease or less). Maximal standard uptake value (SUV), standard uptake ratio (SUR) [normalized independently to the liver (SUR-L) and blood pool (SUR-BP)], metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured pre- and post-chemoradiation. Results: A total of 44 patients were included for review. Median age was 61.9 years, and median follow-up was 2.6 years. Histologic subtypes included adenocarcinoma (72.2%) and squamous cell carcinoma (22.7%), and the majority of patients had T2 disease (59.1%). The rates of pCR and near-complete pathologic response within the primary lesion were 28.9% and 44.4%, respectively. The average reduction in SUVmₐₓ was 9.2 units (range -1.9 to 32.8), and the majority of patients demonstrated some degree of favorable treatment response. SUR-BP and SUR-L showed mean reductions of 4.7 units (range -0.1 to 17.3) and 3.5 units (range -1.7 to 12.6), respectively. Variation in PET response was not significantly associated with histologic subtype, concurrent chemotherapy type, stage, or radiation dose. No significant correlation was found between pathologic response and absolute change in MTV or TLG. Reductions in SUVmₐₓ and SUR were associated with an increased rate of pathologic response (p ≤ 0.02). This correlation was not impacted by normalization of SUR to liver versus mediastinal blood pool. A threshold of > 75% decrease in SUR-L correlated with near-complete response, with a sensitivity of 57.9% and specificity of 85.7%, as well as positive and negative predictive values of 78.6% and 69.2%, respectively (diagnostic odds ratio [DOR]: 5.6, p=0.02). A threshold of > 50% decrease in SUR was also significantly associated with pathologic response (DOR 12.9, p=0.2), but specificity was substantially lower when utilizing this threshold value. No significant association was found between nodal PET parameters and pathologic nodal clearance. Conclusions: Our results suggest that treatment response to neoadjuvant therapy as assessed on PET imaging can be a predictor of pathologic response when evaluated via SUV and SUR. SUR parameters were associated with higher diagnostic odds ratios, suggesting improved predictive utility compared to SUVmₐₓ. MTV and TLG did not prove to be significant predictors of pathologic response but may warrant further investigation in a larger cohort of patients.
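
A minimal sketch of the SUR-based response criterion used above, with invented uptake values: SUR is the lesion SUVmax normalised to a reference region (liver for SUR-L, blood pool for SUR-BP), and a decrease of more than 75% between the pre- and post-CRT scans is taken as the marker of near-complete response.

```python
def sur(lesion_suv_max, reference_suv_mean):
    """Standard uptake ratio: lesion SUVmax normalised to a reference region."""
    return lesion_suv_max / reference_suv_mean

# Invented example values for one patient (liver used as the reference, i.e. SUR-L).
pre  = sur(lesion_suv_max=14.2, reference_suv_mean=2.1)   # pre-chemoradiation scan
post = sur(lesion_suv_max=2.4,  reference_suv_mean=2.0)   # post-chemoradiation scan

pct_decrease = 100.0 * (pre - post) / pre
near_complete = pct_decrease > 75.0                        # threshold from the study
print(f"SUR-L decrease: {pct_decrease:.1f}% -> near-complete response: {near_complete}")
```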

Keywords: lung cancer, positron emission tomography (PET), standard uptake ratio (SUR), standard uptake value (SUV)

Procedia PDF Downloads 234
56 Injunctions, Disjunctions, Remnants: The Reverse of Unity

Authors: Igor Guatelli

Abstract:

The universe of aesthetic perception entails impasses about sensitive divergences that each text or visual object may be subjected to. If approached through intertextuality that is not based on the misleading notion of kinships or similarities a priori admissible, the possibility of anachronistic, heterogeneous - and non-diachronic - assemblies can enhance the emergence of interval movements, intermediate, and conflicting, conducive to a method of reading, interpreting, and assigning meaning that escapes the rigid antinomies of the mere being and non-being of things. In negative, they operate in a relationship built by the lack of an adjusted meaning set by their positive existences, with no remainders; the generated interval becomes the remnant of each of them; it is the opening that obscures the stable positions of each one. Without the negative of absence, of that which is always missing or must be missing in a text, concept, or image made positive by history, nothing is perceived beyond what has been already given. Pairings or binary oppositions cannot lead only to functional syntheses; on the contrary, methodological disturbances accumulated by the approximation of signs and entities can initiate a process of becoming as an opening to an unforeseen other, transformation until a moment when the difficulties of [re]conciliation become the mainstay of a future of that sign/entity, not envisioned a priori. A counter-history can emerge from these unprecedented, misadjusted approaches, beginnings of unassigned injunctions and disjunctions, in short, difficult alliances that open cracks in a supposedly cohesive history, chained in its apparent linearity with no remains, understood as a categorical historical imperative. Interstices are minority fields that, because of their opening, are capable of causing opacity in that which, apparently, presents itself with irreducible clarity. Resulting from an incomplete and maladjusted [at the least dual] marriage between the signs/entities that originate them, this interval may destabilize and cause disorder in these entities and their own meanings. The interstitials offer a hyphenated relationship: a simultaneous union and separation, a spacing between the entity’s identity and its otherness or, alterity. One and the other may no longer be seen without the crack or fissure that now separates them, uniting, by a space-time lapse. Ontological, semantic shifts are caused by this fissure, an absence between one and the other, one with and against the other. Based on an improbable approximation between some conceptual and semantic shifts within the design production of architect Rem Koolhaas and the textual production of the philosopher Jacques Derrida, this article questions the notion of unity, coherence, affinity, and complementarity in the process of construction of thought from these ontological, epistemological, and semiological fissures that rattle the signs/entities and their stable meanings. Fissures in a thought that is considered coherent, cohesive, formatted are the negativity that constitutes the interstices that allow us to move towards what still remains as non-identity, which allows us to begin another story.

Keywords: clearing, interstice, negative, remnant, spectrum

Procedia PDF Downloads 134
55 India’s Energy Transition, Pathways for Green Economy

Authors: B. Sudhakara Reddy

Abstract:

In the modern economy, energy is fundamental to virtually every product and service in use, and the economy has developed in dependence on abundant, easy-to-transform but polluting fossil fuels. On the one hand, increases in population and income levels combined with increased per capita energy consumption require energy production to keep pace with economic growth; on the other, the impact of fossil fuel use on environmental degradation is enormous. The conflicting policy objectives of protecting the environment while increasing economic growth and employment have resulted in this paradox. Hence, it is important to decouple economic growth from environmental degradation, and the search for green energy involving affordable, low-carbon, and renewable energies has become a global priority. This paper explores a transition to a sustainable energy system using the socio-economic-technical scenario method. This approach takes into account the multifaceted nature of transitions, which require not only the development and use of new technologies, but also changes in user behaviour, policy and regulation. Two scenarios are developed: baseline business as usual (BAU) and green energy (GE). The baseline scenario assumes that current trends (energy use, efficiency levels, etc.) will continue in future. India's population is projected to grow by 23% during 2010-2030, reaching 1.47 billion. Real GDP, as per the model, is projected to grow by 6.5% per year on average between 2010 and 2030, reaching US$5.1 trillion, or $3,586 per capita (base year 2010). Due to the increase in population and GDP, primary energy demand will double in two decades, reaching 1,397 MTOE in 2030, with the share of fossil fuels remaining around 80%. The increase in energy use corresponds to an increase in energy intensity (TOE/US$ of GDP) from 0.019 to 0.036. Carbon emissions are projected to increase by 2.5 times from 2010, reaching 3,440 million tonnes, with per capita emissions of 2.2 tons per annum. However, the carbon intensity (tons per US$ of GDP) decreases from 0.96 to 0.67. As per the GE scenario, energy use will reach 1,079 MTOE by 2030, a saving of about 30% over BAU. The penetration of renewable energy resources will reduce the total primary energy demand by 23% under GE. The reduction in fossil fuel demand and the focus on clean energy will reduce the energy intensity to 0.21 (TOE/US$ of GDP) and the carbon intensity to 0.42 (ton/US$ of GDP) under the GE scenario. The study develops new 'pathways out of poverty' by creating more than 10 million jobs and thus raises the standard of living of low-income people. Our scenarios are, to a great extent, based on existing technologies. The challenges to this path lie in socio-economic-political domains. However, to attain a green economy, the appropriate policy package should be in place, which will be critical in determining the kinds of investments that will be needed and the incidence of costs and benefits. These results provide a basis for policy discussions on investments, policies and incentives to be put in place by national and local governments.

Keywords: energy, renewables, green technology, scenario

Procedia PDF Downloads 250
54 Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web

Authors: Aayushi Somani, Siba P. Samal

Abstract:

Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes give better-quality rendering and hence are used often, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. Also, there is an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into and processed on hand-held devices, owing to their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our work takes a two-step approach to randomly accessible progressive compression and its parallel implementation. The first step partitions the original mesh into multiple sub-meshes; we then invoke data parallelism on these sub-meshes for their compression. Subsequent threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to completely revolutionize the way e-commerce and virtual reality technology works for consumer electronic devices. These objects can be compressed on the server and transmitted over the network. Progressive decompression can be performed on the client device and the result rendered. Multiple views currently used in e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a better and smoother user experience. The approach can also be used in WebVR for widely used activities like virtual reality shopping, watching movies and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (compressed size is ~10-15% of the original mesh), processing time (20-22% increase over the serial implementation) and quality of user experience in the web browser.
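
The two-step idea described above (partition the mesh, then compress the sub-meshes with data parallelism) can be sketched as below. The partitioning is a naive equal split and the codec is a stand-in (zlib over packed vertex and index buffers); the paper's random-accessible progressive codec and the Chromium/WebGL decompression path are not reproduced here.

```python
import zlib
import numpy as np
from multiprocessing import Pool

def partition(vertices, faces, n_parts):
    """Naive equal split into sub-meshes (a real partitioner would preserve adjacency)."""
    return list(zip(np.array_split(vertices, n_parts), np.array_split(faces, n_parts)))

def compress_submesh(submesh):
    """Stand-in codec: pack the buffers and deflate them."""
    vertices, faces = submesh
    payload = vertices.astype(np.float32).tobytes() + faces.astype(np.uint32).tobytes()
    return zlib.compress(payload, level=9)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    vertices = rng.random((10_000, 3))                    # toy vertex buffer
    faces = rng.integers(0, 10_000, size=(20_000, 3))     # toy index buffer

    submeshes = partition(vertices, faces, n_parts=4)
    with Pool(processes=4) as pool:                       # data parallelism over sub-meshes
        blobs = pool.map(compress_submesh, submeshes)     # compress sub-meshes concurrently

    raw = vertices.astype(np.float32).nbytes + faces.astype(np.uint32).nbytes
    packed = sum(len(b) for b in blobs)
    # Random buffers compress poorly; structured real meshes do far better.
    print(f"compressed payload is {100 * packed / raw:.1f}% of the raw buffers")
```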

Keywords: 3D compression, 3D mesh, 3D web, chromium, client-server architecture, e-commerce, level of details, parallelization, progressive compression, WebGL, WebVR

Procedia PDF Downloads 170
53 Understanding the Cause(S) of Social, Emotional and Behavioural Difficulties of Adolescents with ADHD and Its Implications for the Successful Implementation of Intervention(S)

Authors: Elisavet Kechagia

Abstract:

Due to the interplay of different genetic and environmental risk factors and its heterogeneous nature, the concept of attention deficit hyperactivity disorder (ADHD) has shaped controversy and conflicts, which have been, in turn, reflected in the controversial arguments about its treatment. Taking into account recent well evidence-based researches suggesting that ADHD is a condition, in which biopsychosocial factors are all weaved together, the current paper explores the multiple risk-factors that are likely to influence ADHD, with a particular focus on adolescents with ADHD who might experience comorbid social, emotional and behavioural disorders (SEBD). In the first section of this paper, the primary objective was to investigate the conflicting ideas regarding the definition, diagnosis and treatment of ADHD at an international level as well as to critically examine and identify the limitations of the two most prevailing sets of diagnostic criteria that inform current diagnosis, the American Psychiatric Association’s (APA) diagnostic scheme, DSM-V, and the World Health Organisation’s (WHO) classification of diseases, ICD-10. Taking into consideration the findings of current longitudinal studies on ADHD association with high rates of comorbid conditions and social dysfunction, in the second section the author moves towards an investigation of the transitional points −physical, psychological and social ones− that students with ADHD might experience during early adolescence, as informed by neuroscience and developmental contextualism theory. The third section is an exploration of the different perspectives of ADHD as reflected in individuals’ with ADHD self-reports and the KENT project’s findings on school staff’s attitudes and practices. In the last section, given the high rates of SEBDs in adolescents with ADHD, it is examined how cognitive behavioural therapy (CBT), coupled with other interventions, could be effective in ameliorating anti-social behaviours and/or other emotional and behavioral difficulties of students with ADHD. The findings of a range of randomised control studies indicate that CBT might have positive outcomes in adolescents with multiple behavioural problems, hence it is suggested to be considered both in schools and other community settings. Finally, taking into account the heterogeneous nature of ADHD, the different biopsychosocial and environmental risk factors that take place during adolescence and the discourse and practices concerning ADHD and SEBD, it is suggested how it might be possible to make sense of and meaningful improvements to the education of adolescents with ADHD within a multi-modal and multi-disciplinary whole-school approach that addresses the multiple problems that not only students with ADHD but also their peers might experience. Further research that would be based on more large-scale controls and would investigate the effectiveness of various interventions, as well as the profiles of those students who have benefited from particular approaches and those who have not, will generate further evidence concerning the psychoeducation of adolescents with ADHD allowing for generalised conclusions to be drawn.

Keywords: adolescence, attention deficit hyperactivity disorder, cognitive behavioural therapy, comorbid social emotional behavioural disorders, treatment

Procedia PDF Downloads 320
52 Impact of Sufism on Indian Cinema: A New Cultural Construct for Mediating Conflict

Authors: Ravi Chaturvedi, Ghanshyam Beniwal

Abstract:

Without going much into the detail of long history of Sufism in the world and the etymological definition of the word ‘Sufi’, it will be sufficient to underline that the concept of Sufism was to focus the mystic power on the spiritual dimension of Islam with a view-shielding the believers from the outwardly and unrealistic dogma of the faith. Sufis adopted rather a liberal view in propagating the religious order of Islam suitable to the cultural and social environment of the land. It is, in fact, a mission of higher religious order of any faith, which disdains strife and conflict in any form. The joy of self-realization being the essence of religion is experienced after a long spiritual practice. India had Sufi and Bhakti (devotion) traditions in Islam and Hinduism, respectively. Both Sufism and Bhakti traditions were based on respect for different religions. The poorer and lower caste Hindus and Muslims were greatly influenced by these traditions. Unlike Ulemas and Brahmans, the Sufi and Bhakti saints were highly tolerant and open to the truth in other faiths. They never adopted sectarian attitudes and were never involved in power struggles. They kept away from power structures. Sufism is integrated with the Indian cinema since its initial days. In the earliest Bollywood movies, Sufism was represented in the form of qawwali which made its way from dargahs (shrines). Mixing it with pop influences, Hindi movies began using Sufi music in a big way only in the current decade. However, of late, songs with Sufi influences have become de rigueur in almost every film being released these days, irrespective of the genre, whether it is a romantic Gangster or a cerebral Corporate. 'Sufi is in the DNA of the Indian sub-continent', according to several contemporary filmmakers, critics, and spectators.The inherent theatricality motivates the performer of the 'Sufi' rituals for a dramatic behavior. The theatrical force of these stages of Sufi practice is so powerful that even the spectator cannot resist himself from being moved. In a multi-cultural country like India, the mediating streams have acquired a multi-layered importance in recent history. The second half of Indian post-colonial era has witnessed a regular chain of some conflicting religio-political waves arising from various sectarian camps in the country, which have compelled the counter forces to activate for keeping the spirit of composite cultural ethos alive. The study has revealed that the Sufi practice methodology is also being adapted for inclusion of spirituality in life at par to Yoga practice. This paper, a part of research study, is an attempt to establish that the Sufism in Indian cinema is one such mediating voice which is very active and alive throughout the length and width of the country continuously bridging the gap between various religious and social factions, and have a significant role to play in future as well.

Keywords: Indian cinema, mediating voice, Sufi, yoga practice

Procedia PDF Downloads 497
51 Jigger Flea (Tunga penetrans) Infestations and Use of Soil-Cow Dung-Ash Mixture as a Flea Control Method in Eastern Uganda

Authors: Gerald Amatre, Julius Bunny Lejju, Morgan Andama

Abstract:

Despite several interventions, jigger flea infestations continue to be reported in the Busoga sub-region in Eastern Uganda. The purpose of this study was to identify factors that expose the indigenous people to jigger flea infestations and evaluate the effectiveness of any indigenous materials used in flea control by the affected communities. Flea composition in residences was described, and factors associated with flea infestation and indigenous materials used in flea control were evaluated. Field surveys were conducted in the affected communities after obtaining preliminary information on jigger infestation from the offices of the District Health Inspectors to identify the affected villages and households. Informed consent was then sought from the local authorities and household heads to conduct the study. Focus group discussions were conducted with key district informants, namely, the District Health Inspectors, District Entomologists and representatives from the District Health Office. A GPS coordinate was taken at a central point in every household enrolled. Fleas were trapped inside residences using Kilonzo traps. A Kilonzo trap comprised a shallow pan, about three centimetres deep, filled to the brim with water. The edges of the pan were smeared with Vaseline to prevent fleas from crawling out. Traps were placed in the evening and checked the following morning. The trapped fleas were collected in labelled vials filled with 70% aqueous ethanol and taken to the laboratory for identification. Socio-economic and environmental data were collected. The results indicate that the commonest flea trapped in the residences was the cat flea (Ctenocephalides felis) (50%), followed by the jigger flea (Tunga penetrans) (46%) and the rat flea (Xenopsylla cheopis) (4%). The average size of residences was seven square metres, with a mean of six occupants. The residences were generally untidy, with loose, dusty floors, and the brick walls were not plastered. The majority of the jigger-affected households were headed by peasants (86.7%) and artisans (13.3%). The household heads had mainly stopped at primary school level (80%), with a few at secondary school level (20%). The jigger-affected households were thus mainly headed by peasants of low socioeconomic status. The affected community members use a soil-cow dung-ash mixture to smear the floors of residences as the only measure to control fleas. This method was found to be ineffective in controlling the insects. The study recommends that home improvement campaigns be continued in the affected communities to improve sanitation and hygiene in residences as one of the interventions to combat flea infestations. Other cheap, available and effective means should be identified to curb jigger flea infestations.

Keywords: cow dung-soil-ash mixture, infestations, jigger flea, Tunga penetrans

Procedia PDF Downloads 136
50 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor and performing surveys may disrupt normal operations. Given the variability of manual surveys, this method has shown inconsistencies in distress classification and measurement. This can potentially impact the planning for pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted for the past 20 years to reduce the human intervention and the error associated with it by moving toward automated distress collection methods. The automated methods refer to the systems that identify, classify and quantify pavement distresses through processes that require no or very minimal human intervention. This principally involves the use of a digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for measurement and classification of pavement cracks captured using digital images is a challenge to developing a reliable automated system for distress assessment. Variations in types and severity of distresses, different pavement surface textures and colors and presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves collection, interpretation, and processing of the surface images to identify the type, quantity and severity of the surface distresses. The outputs can be used to quantitatively calculate the crack density. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and to predict pavement performance. This can be used by airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
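
As a concrete illustration of the crack density measure discussed above, the sketch below divides the total mapped crack length by the surveyed area and adds an optional severity weighting. The sample records, units and weights are assumptions for illustration, not an established airport protocol.

```python
# Hypothetical crack records produced by an automated distress survey.
cracks = [
    {"length_m": 12.4, "severity": "low"},
    {"length_m": 6.1,  "severity": "medium"},
    {"length_m": 3.2,  "severity": "high"},
]
severity_weight = {"low": 1.0, "medium": 2.0, "high": 3.0}   # assumed weights
surveyed_area_m2 = 500.0

# Crack density: total crack length per unit of surveyed pavement area (m/m^2).
density = sum(c["length_m"] for c in cracks) / surveyed_area_m2
weighted = sum(severity_weight[c["severity"]] * c["length_m"] for c in cracks) / surveyed_area_m2
print(f"crack density: {density:.3f} m/m^2, severity-weighted index: {weighted:.3f}")
```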

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
49 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix

Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti

Abstract:

When semiconductor particles are reduced in scale to nanometer dimensions, their optical and electro-optical properties strongly differ from those of bulk crystals of the same composition. Since sampling of cultural heritage artefacts is often not allowed, the potential of two non-invasive techniques, Raman and Fiber Optic Reflectance Spectroscopy (FORS), has been investigated, and the results of the analysis of some original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in the present study. In order to evaluate the potential of non-invasive techniques for the investigation of the structure and distribution of nanoparticles dispersed in a glass matrix, Scanning Electron Microscopy (SEM) and energy-dispersive spectroscopy (EDS) mapping, together with Transmission Electron Microscopy (TEM) and Electron Diffraction Tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measurement of the quantum dot composition and size, through the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, though the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix into the nanocrystals should be taken into account when considering the optical-phonon frequencies. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the role of surface phonons, as well as of confinement-induced scattering by phonons with non-zero wavevectors, in the broadening of the Raman peaks has been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe). For the compositional range 0.2≤x≤0.5, the presence of two absorption edges has been related to the contribution of both pure CdS and the CdSxSe1-x solid solution; this particular feature is probably due to the presence of unaltered cubic zinc blende structures of CdS that do not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band edge tailing originating from the disorder due to the formation of weak bonds, characterized by the Urbach edge energy, has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter to evaluate the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially for those samples where a layered structure had been assumed on the basis of the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about the nanocrystals (NCs) and the heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix has been verified for most of the samples, while, for those samples where disorder due to a more complex distribution of the size and/or composition of the NCs had been assumed, TEM clearly verified most of the assumptions made by the spectroscopic techniques.
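
The composition dependence of the optical gap mentioned above (2.42 eV for pure CdS down to 1.70 eV towards CdSe) is commonly summarised by linear interpolation between the end members plus a bowing term. The sketch below uses the end-member values quoted in the abstract; the bowing parameter is an assumed, purely illustrative value, not a result of the study.

```python
import numpy as np

E_CDS, E_CDSE = 2.42, 1.70   # end-member gaps from the abstract (eV)
BOWING = 0.3                 # assumed, illustrative bowing parameter (eV)

def band_gap(x, b=BOWING):
    """Approximate optical gap of CdS(x)Se(1-x) for CdS fraction x in [0, 1]."""
    return x * E_CDS + (1 - x) * E_CDSE - b * x * (1 - x)

for x in np.linspace(0.0, 1.0, 6):
    print(f"x = {x:.1f}  Eg ~ {band_gap(x):.2f} eV")
```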

Keywords: CdSxSe1-x, EDT, glass, spectroscopy, TEM-EDS

Procedia PDF Downloads 299
48 Implementation of Language Policy in a Swedish Multicultural Early Childhood School: A Development Project

Authors: Carina Hermansson

Abstract:

This presentation focuses on a development project aimed at developing and documenting the steps taken at a multilingual, multicultural K-5 school to improve the achievement levels of the pupils by focusing on language and literacy development across the schedule, in a digital classroom and in all units of the school. This pre-formulated aim may thus be said to adhere to neoliberal educational and accountability policies in terms of its focus on digital learning, learning results, and national curriculum standards. In particular, the project aimed at improving the collaboration between the teachers, the leisure time unit, the librarians, the mother tongue teachers and the bilingual study counselors. This is a school environment characterized by cultural, ethnic, linguistic, and professional pluralization. The overarching aims of the research project were to scrutinize and analyze the factors enabling and obstructing the implementation of the Language Policy in a digital classroom. Theoretical framework: We apply multi-level perspectives in the analyses, inspired by Uljens' ideas about interactive and interpersonal first-order (teacher/students) and second-order (principal/teachers and other staff) educational leadership as described within the framework of discursive institutionalism, when we try to relate the Language Policy, educational policy, and curriculum to the administrative processes. Methodology/research design: The development project is based on recurring research circles where teachers, leisure time assistants, mother tongue teachers and study counselors speaking the mother tongue of the pupils, together with two researchers, discuss their digital literacy practices in the classroom. The researchers have, in collaboration with the principal, developed guidelines for the work, expressed in a Language Policy document. In our understanding, the document is, however, only a part of the concept; the actions of the personnel and their reflections on the practice constitute the major part of the development project. One and a half years out of three have now passed, and the project has met with a series of difficulties which shed light on factors of importance for the progress of the development project. Field notes and recordings from the research circles, a survey of the personnel, and recorded group interviews provide data on the progress of the project. Expected conclusions: The problems experienced concern leadership, curriculum, the interplay between aims, technology, contents and methods, parents as customers taking their children to other schools, conflicting values, and interactional difficulties; that is, phenomena on different levels, ranging from the school to the societal level, as, for example, teachers being substituted as a result of the marketization of schools. Underlying assumptions held by actors at different levels also create obstacles. We find this study and the problems we are facing utterly important to share and discuss in an era with a steady flow of refugees arriving in the Nordic countries.

Keywords: early childhood education, language policy, multicultural school, school development project

Procedia PDF Downloads 145
47 Calculation of Organ Dose for Adult and Pediatric Patients Undergoing Computed Tomography Examinations: A Software Comparison

Authors: Aya Al Masri, Naima Oubenali, Safoin Aktaou, Thibault Julien, Malorie Martin, Fouad Maaloul

Abstract:

Introduction: The increased number of Computed Tomography (CT) examinations performed raises public concerns regarding the associated stochastic risk to patients. In its Publication 102, the International Commission on Radiological Protection (ICRP) emphasized the importance of managing patient dose, particularly from repeated or multiple examinations. We developed a Dose Archiving and Communication System that gives multiple dose indexes (organ dose, effective dose, and skin-dose mapping) for patients undergoing radiological imaging exams. The aim of this study is to compare the organ dose values given by our software for patients undergoing CT exams with those of another software package named "VirtualDose". Materials and methods: Our software uses Monte Carlo simulations to calculate organ doses for patients undergoing computed tomography examinations. The general calculation principle consists of simulating: (1) the scanner machine with all its technical specifications and associated irradiation settings (kVp, field collimation, mAs, pitch, etc.), and (2) detailed geometric and compositional information for dozens of well-identified organs of computational hybrid phantoms that contain the necessary anatomical data. The mass as well as the elemental composition of the tissues and organs that constitute our phantoms correspond to the recommendations of the international organizations (namely the ICRP and the ICRU). Their body dimensions correspond to reference data developed in the United States. Simulated data were verified by clinical measurements. To perform the comparison, 270 adult patients and 150 pediatric patients were used, whose data correspond to exams carried out in French hospital centers. The comparison dataset of adult patients includes adult males and females for three different scanner machines and three different acquisition protocols (Head, Chest, and Chest-Abdomen-Pelvis). The comparison sample of pediatric patients includes the exams of thirty patients for each of the following age groups: newborn, 1-2 years, 3-7 years, 8-12 years, and 13-16 years. The comparison for pediatric patients was performed on the "Head" protocol. The percentage dose difference was calculated for organs receiving a significant dose according to the acquisition protocol (at least 80% of the maximal dose). Results: Adult patients: for organs that are completely covered by the scan range, the maximum percentage dose difference between the two software packages is 27%. However, three organs situated at the edges of the scan range show a slightly higher dose difference. Pediatric patients: the percentage dose difference between the two software packages does not exceed 30%. These dose differences may be due to the use of two different generations of hybrid phantoms by the two software packages. Conclusion: This study shows that our software provides reliable dosimetric information for patients undergoing Computed Tomography exams.
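
A minimal sketch of the comparison rule described above: percentage dose differences are computed only for organs whose dose reaches 80% of the maximal organ dose for the protocol. The organ names and dose values are invented for illustration and are not outputs of either software package.

```python
# Hypothetical organ doses (mGy) reported by the two packages for one chest exam.
doses_sw1 = {"lungs": 18.2, "breast": 16.9, "heart": 15.4, "liver": 6.3, "thyroid": 2.1}
doses_sw2 = {"lungs": 16.0, "breast": 17.8, "heart": 13.1, "liver": 5.5, "thyroid": 2.6}

threshold = 0.8 * max(doses_sw1.values())   # "significant dose" cut-off: 80% of the maximum
for organ, d1 in doses_sw1.items():
    if d1 >= threshold:
        diff_pct = 100.0 * abs(d1 - doses_sw2[organ]) / d1
        print(f"{organ}: {diff_pct:.1f}% difference")
```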

Keywords: adult and pediatric patients, computed tomography, organ dose calculation, software comparison

Procedia PDF Downloads 164
46 Shocks and Flows - Employing a Difference-In-Difference Setup to Assess How Conflicts and Other Grievances Affect the Gender and Age Composition of Refugee Flows towards Europe

Authors: Christian Bruss, Simona Gamba, Davide Azzolini, Federico Podestà

Abstract:

In this paper, the authors assess the impact of different political and environmental shocks on the size and on the age and gender composition of asylum-related migration flows to Europe. With this paper, the authors contribute to the literature by looking at the impact of different political and environmental shocks on the gender and age composition of migration flows in addition to the size of these flows. Conflicting theories predict different outcomes concerning the relationship between political and environmental shocks and the migration flows composition. Analyzing the relationship between the causes of migration and the composition of migration flows could yield more insights into the mechanisms behind migration decisions. In addition, this research may contribute to better informing national authorities in charge of receiving these migrant, as women and children/the elderly require different assistance than young men. To be prepared to offer the correct services, the relevant institutions have to be aware of changes in composition based on the shock in question. The authors analyze the effect of different types of shocks on the number, the gender and age composition of first time asylum seekers originating from 154 sending countries. Among the political shocks, the authors consider: violence between combatants, violence against civilians, infringement of political rights and civil liberties, and state terror. Concerning environmental shocks, natural disasters (such as droughts, floods, epidemics, etc.) have been included. The data on asylum seekers applying to any of the 32 Schengen Area countries between 2008 and 2015 is on a monthly basis. Data on asylum applications come from Eurostat, data on shocks are retrieved from various sources: georeferenced conflict data come from the Uppsala Conflict Data Program (UCDP), data on natural disasters from the Centre for Research on the Epidemiology of Disasters (CRED), data on civil liberties and political rights from Freedom House, data on state terror from the Political Terror Scale (PTS), GDP and population data from the World Bank, and georeferenced population data from the Socioeconomic Data and Applications Center (SEDAC). The authors adopt a Difference-in-Differences identification strategy, exploiting the different timing of several kinds of shocks across countries. The highly skewed distribution of the dependent variable is taken into account by using count data models. In particular, a Zero Inflated Negative Binomial model is adopted. Preliminary results show that different shocks - such as armed conflict and epidemics - exert weak immediate effects on asylum-related migration flows and almost non-existent effects on the gender and age composition. However, this result is certainly affected by the fact that no time lags have been introduced so far. Finding the correct time lags depends on a great many variables not limited to distance alone. Therefore, finding the appropriate time lags is still a work in progress. Considering the ongoing refugee crisis, this topic is more important than ever. The authors hope that this research contributes to a less emotionally led debate.
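
The estimation strategy described above can be sketched with statsmodels' zero-inflated negative binomial model on a simulated staggered-shock panel: a shock indicator plus country and month fixed effects stands in for the difference-in-differences design. All data are simulated, and the specification is a simplified stand-in for the authors' model rather than their actual code.

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)

# Simulated monthly panel: origin countries x months, with a shock indicator
# switching on at a country-specific month (staggered treatment timing).
n_countries, n_months = 20, 36
df = pd.DataFrame([(c, t) for c in range(n_countries) for t in range(n_months)],
                  columns=["country", "month"])
onset = rng.integers(6, 30, size=n_countries)
df["shock"] = (df["month"].to_numpy() >= onset[df["country"].to_numpy()]).astype(int)

# Over-dispersed counts with excess zeros, mimicking skewed application data.
mu = np.exp(0.5 + 0.6 * df["shock"].to_numpy() + 0.02 * df["month"].to_numpy())
y = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))
y[rng.random(len(df)) < 0.3] = 0
df["applications"] = y

# DiD-style design matrix: constant, shock indicator, country and month dummies.
dummies = pd.get_dummies(df[["shock", "country", "month"]],
                         columns=["country", "month"], drop_first=True).astype(float)
X = pd.concat([pd.Series(1.0, index=df.index, name="const"), dummies], axis=1)

model = ZeroInflatedNegativeBinomialP(df["applications"], X, p=2)  # NB2 count part, logit inflation
res = model.fit(method="bfgs", maxiter=2000, disp=False)
print("shock coefficient:", res.params["shock"])   # effect of a shock on log expected applications
```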

Keywords: age, asylum, Europe, forced migration, gender

Procedia PDF Downloads 262
45 Sustainability of the Built Environment of Ranchi District

Authors: Vaidehi Raipat

Abstract:

A city is an expression of coexistence between its users and its built environment. The way in which its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently towards its people, culture, environment, visual image, history, visions and identity. The quality of the built environment determines the quality of our lifestyles, but a poor ability of the built environment to adapt and sustain itself through change leads to the degradation of cities. Ranchi became the capital of the newly formed state of Jharkhand, located on the eastern side of India, in November 2000. Before this, Ranchi was known as the summer capital of Bihar and was only a little larger than a town in terms of development. Since then, however, it has been vigorously expanding in size, infrastructure as well as population. This sudden expansion has created stress on the existing built environment. The large forest cover, agricultural land, diverse culture and pleasant climatic conditions have degraded and decreased to a large extent. Narrow roads and old buildings are unable to bear the load of the changing requirements, fast-improving technology and growing population. The built environment has hence been rendered unsustainable and unadaptable through the rapid changes of the present era. Some of the common hazards that can easily be spotted in the built environment are half-finished built forms, pedestrians and vehicles moving on the same part of the road, unpaved areas on street edges, over-sized, bright and randomly placed hoardings, and negligible trees or green spaces. The old buildings have been poorly maintained, and new ones are being constructed over them. Roads are too narrow to cater to the increasing traffic, both pedestrian and vehicular. The streets accommodate a large variety of activities, but haphazardly. Trees are being cut down for road widening and new constructions. There is no space for greenery in the commercial as well as the old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations, while a pseudo-understanding of functionality as well as aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the city's existing built environment and to regenerate it into a more sustainable and adaptable one. For this purpose, research titled “Sustainability of the Built Environment of Ranchi District” has been carried out. In this research, the condition of the built environment of Ranchi is explored so as to identify the problems and shortcomings existing in the city and to provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, which includes its outdoor spaces like streets, parks and other open areas, its built forms as well as its users, has been analyzed in terms of various urban design parameters. Based on this analysis, strategies have been suggested to make the city environmentally, socially, culturally and economically sustainable.

Keywords: adaptable, built-environment, sustainability, urban

Procedia PDF Downloads 237
44 Framework Proposal on How to Use Game-Based Learning, Collaboration and Design Challenges to Teach Mechatronics

Authors: Michael Wendland

Abstract:

This paper presents a framework for teaching a methodical design approach with the help of a mixture of game-based learning, design challenges and competitions as forms of direct assessment. In today’s world, developing products is more complex than ever. Conflicting goals of product cost and quality, combined with limited time as well as post-pandemic part shortages, increase the difficulty. Common design approaches for mechatronic products mitigate some of these effects by supporting users with a methodical framework. Due to the inherent complexity of these products, the number of involved resources and the comprehensive design processes, students very rarely have enough time or motivation to experience a complete approach in a one-semester course. However, for students to be successful in the industrial world, it is crucial to know these methodical frameworks and to gain first-hand experience. Therefore, it is necessary to teach these design approaches in a real-world setting, keep motivation high and learn to manage upcoming problems. This is achieved by using a game-based approach and a set of design challenges that are given to the students. In order to mimic industrial collaboration, they work in teams of up to six participants and are given the main development target of designing a remote-controlled robot that can manipulate a specified object. By setting this clear goal without a given solution path, a restricted time frame and a limited maximum cost, the students are subjected to boundary conditions similar to those of the real world. They must follow the steps of the methodical approach by specifying requirements, conceptualizing their ideas, drafting, designing, manufacturing and building a prototype using rapid prototyping. At the end of the course, the prototypes are entered into a contest against the other teams. The complete design process is accompanied by theoretical input via lectures, which is immediately transferred by the students to their own design problem in practical sessions. To increase motivation in these sessions, a playful learning approach has been chosen, i.e., designing the first concepts is supported by using Lego construction kits. After each challenge, mandatory online quizzes help to deepen the students' acquired knowledge, and badges are awarded to those who complete a quiz, resulting in higher motivation and a level-up on a fictional leaderboard. The final contest is held in person and involves all teams with their functional prototypes, which now compete against each other. Prizes are awarded for the best mechanical design, the most innovative approach and the winner of the robotic contest. Each robot design is evaluated with regard to the specified requirements, and partial grades are derived from the results. This paper concludes with a critical review of the proposed framework, the game-based approach for the designed prototypes, the realism of the boundary conditions, the problems that occurred during the design and manufacturing process, the experiences and feedback of the students and the effectiveness of their collaboration, as well as a discussion of the potential transfer to other educational areas.

Keywords: design challenges, game-based learning, playful learning, methodical framework, mechatronics, student assessment, constructive alignment

Procedia PDF Downloads 67
43 Control of Belts for Classification of Geometric Figures by Artificial Vision

Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez

Abstract:

The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Currently, artificial vision is used in manufacturing for quality control and production, as these processes can be realized through counting, positioning, and object-recognition algorithms using a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators for moving pieces from one location to another during production. These devices must be programmed in advance to perform well and must follow a programmed logic routine. Nowadays, production is the main target of every industry, together with quality and the fast execution of the different stages and processes in the production chain of any product or service being offered. The principal aim of this project is to program a computer that recognizes geometric figures (circle, square, and triangle), each with a different color, through a camera, and to link it with a group of conveyor systems that organize the mentioned figures in cubicles, which also differ from one another in color. This project is based on artificial vision; therefore, the methodology needed to develop it must be rigorous and is detailed below. 1. Methodology: 1.1 The software used in this project is Qt Creator, linked with the OpenCV libraries. Together, these tools are used to build the program that identifies colors and shapes directly from the camera on the computer. 1.2 Image acquisition: to start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer’s web camera or a different specialized camera. 1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect shapes, it is necessary to segment the images: the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; the image is then binarized, which means leaving the figure in white on a black background. Finally, the contours of the figure are found and the number of edges is counted to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, classify the figures into their respective cubicles. Conclusions: the OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics that can then be processed. With the program developed for this project, any type of assembly line can be optimized, because images of the environment can be obtained and the process becomes more accurate.
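
As an illustration of steps 1.2-1.5, the following sketch shows one way the color and shape recognition could look in OpenCV's Python bindings. The abstract describes Qt Creator with the C++ OpenCV libraries, so this is a simplified, hypothetical Python analogue; the camera index, thresholds and OpenCV 4.x return signature are assumptions.

```python
# Simplified sketch of the colour/shape pipeline described in steps 1.2-1.5.
# Thresholds, camera index and the OpenCV 4.x API are assumptions.
import cv2

cap = cv2.VideoCapture(0)            # 1.2 image acquisition from a web camera
ret, frame = cap.read()
cap.release()

if ret:
    # 1.3 colour recognition: compare the mean intensity of the B, G, R channels
    b, g, r = cv2.split(frame)
    channel_means = {"blue": b.mean(), "green": g.mean(), "red": r.mean()}
    dominant_colour = max(channel_means, key=channel_means.get)

    # 1.4 shape detection: grayscale -> binarise -> contours -> count edges
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.04 * cv2.arcLength(contour, True), True)
        if len(approx) == 3:
            shape = "triangle"
        elif len(approx) == 4:
            shape = "square"
        else:
            shape = "circle"
        # 1.5 this (colour, shape) result would drive the conveyor actuators
        print(dominant_colour, shape)
```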

Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB

Procedia PDF Downloads 379
42 Ascribing Identities and Othering: A Multimodal Discourse Analysis of a BBC Documentary on YouTube

Authors: Shomaila Sadaf, Margarethe Olbertz-Siitonen

Abstract:

This study looks at identity and othering in discourses around sensitive issues in social media. More specifically, the study explores the multimodal resources and narratives through which the other is formed and identities are ascribed in online spaces. As an integral part of social life, media spaces have become an important site for negotiating and ascribing identities. In line with recent research, identity is seen here as constructions of belonging which go hand in hand with processes of in- and out-group formation that in some cases may lead to othering. Previous findings underline that identities are neither fixed nor limited but rather contextual, intersectional, and interactively achieved. The goal of this study is to explore and develop an understanding of how people co-construct the ‘other’ and ascribe certain identities in social media using multiple modes. At the beginning of 2018, the British government decided to include relationships, sexual orientation, and sex education in the curriculum of state-funded primary schools. However, the addition of information related to LGBTQ+ in the curriculum has been met with resistance, particularly from religious parents. For example, the British Muslim community has voiced its concerns and protested against the actions taken by the British government. YouTube has been used by news companies to air video stories covering the protest and the narratives of the protestors, along with the position of school officials. The analysis centers on a YouTube video dealing with the protest of a local group of parents against the addition of information about LGBTQ+ to the curriculum in the UK. The video was posted in 2019. By the time of this study, the video had approximately 169,000 views and around 6,000 comments. In deference to the multimodal nature of YouTube videos, this study utilizes multimodal discourse analysis as the method of choice. The study is still ongoing and therefore has not yet yielded any final results. However, the initial analysis indicates a hierarchy of ascribing identities in the data. Drawing on multimodal resources, the media work with social categorizations throughout the documentary, presenting and classifying the involved conflicting parties in the light of their own visible and audible identifications. The protesters can be seen to construct a strong group identity as Muslim parents (e.g., clothing and reference to shared values). While the video appears to be designed as a documentary that puts forward facts, the media do not seem to succeed in taking a neutral position consistently throughout the video. At times, the use of images, sounds and language contributes to the formation of “us” vs. “them”, where the audience is implicitly encouraged to pick a side. Only towards the end of the documentary is this problematic opposition addressed and critically reflected upon, through an expert interview that is, interestingly, visually located outside the previously presented ‘battlefield’. This study contributes to the growing understanding of the discursive construction of the ‘other’ in social media. Videos available online are a rich source for examining how different social actors ascribe multiple identities and form the other.

Keywords: identity, multimodal discourse analysis, othering, youtube

Procedia PDF Downloads 114
41 Assessment of Physical Learning Environments in ECE: Interdisciplinary and Multivocal Innovation for Chilean Kindergartens

Authors: Cynthia Adlerstein

Abstract:

The physical learning environment (PLE) has been considered, after family and educators, as the third teacher. There have been conflicting and converging viewpoints on the role of the physical dimensions of places to learn in facilitating educational innovation and quality. Despite the different approaches, PLE has been widely recognized as a key factor in the quality of the learning experience and in the levels of learning achievement in ECE. The conceptual frameworks of the field assume that PLE consists of a complex web of factors that shape the overall conditions for learning, and that much more interdisciplinary and complementary methodologies of research and development are required. Although the relevance of PLE attracts a broad international consensus, in Chile it remains under-researched and weakly regulated by public policy. Gaining a deeper contextual understanding and producing more thoughtfully designed recommendations require the use of innovative assessment tools that cross cultural and disciplinary boundaries to produce new hybrid approaches and improvements. When considering a PLE-based change process for ECE improvement, a central question is which dimensions, variables and indicators could allow a comprehensive assessment of PLE in Chilean kindergartens. Based on a grounded theory social justice inquiry, we adopted a mixed-methods design that enabled a multivocal and interdisciplinary construction of data. By using in-depth interviews, discussion groups, questionnaires, and documental analysis, we elicited the PLE discourses of politicians, early childhood practitioners, experts in architectural design and ergonomics, ECE stakeholders, and 3- to 5-year-olds. A constant comparison method enabled the construction of the dimensions, variables and indicators through which PLE assessment is possible. Subsequently, the instrument was applied to a sample of 125 early childhood classrooms to test reliability (internal consistency) and validity (content and construct). As a result, an interdisciplinary and multivocal tool for assessing physical learning environments was constructed and validated for Chilean kindergartens. The tool is structured upon 7 dimensions (wellbeing, flexible, empowerment, inclusiveness, symbolically meaningful, pedagogically intentioned, institutional management), 19 variables and 105 indicators that are assessed through observation and registration on a mobile app. The overall reliability of the instrument is .938, while the consistency of each dimension varies between .773 (inclusiveness) and .946 (symbolically meaningful). The validation process through expert opinion and factorial analysis (chi-square test) has shown that the dimensions of the assessment tool reflect the factors of physical learning environments. The constructed assessment tool for kindergartens highlights the significance of the physical environment in early childhood educational settings. The relevance of the instrument lies in its interdisciplinary approach to PLE and in its capability to guide innovative learning environments based on educational habitability. Though further analyses are required for concurrent validation and standardization, the tool has been considered by practitioners and ECE stakeholders as an intuitive, accessible and remarkable instrument for raising awareness of PLE and of the equitable distribution of learning opportunities.
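
As a small illustration of the internal-consistency figures reported above (e.g., .938 for the full tool), the sketch below shows how a Cronbach's alpha can be computed from item-level indicator scores; the file name and column layout are hypothetical and this is not the authors' actual analysis code.

```python
# Hypothetical sketch: Cronbach's alpha from item-level scores
# (rows = assessed classrooms, columns = indicators). Not the authors' code.
import pandas as pd

items = pd.read_csv("ple_indicator_scores.csv")       # one column per indicator
k = items.shape[1]
sum_item_var = items.var(axis=0, ddof=1).sum()        # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)             # variance of total scores

alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")
```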

Keywords: Chilean kindergartens, early childhood education, physical learning environment, third teacher

Procedia PDF Downloads 358
40 Psychogeographic Analysis of Spatial Appropriation within Walking Practice: The City Centre versus University Campus in the Case of Van, Turkey

Authors: Yasemin Ilkay

Abstract:

Urban spatial patterns interact with the minds and bodies of citizens and influence their perception and attitudes, which leads to a two-fold map of the same space: a physical and a psychogeographic map. Psychogeography is a field of inquiry (rooted in literature and fiction) investigating how the environment affects the feelings and behaviors of individuals. The term was coined in the 1950s by Guy Debord of the Situationist International movement; in the course of time, the artistic framework evolved into a political issue, especially with the term dérive, which indicates ‘deviation’ and ‘resistance’ to the existing spatial reality. The term dérive appeared in the track of the flâneur one hundred years later and turned out to be a political tool to transform everyday urban life. The three main concepts of psychogeography [walking, dérive, and palimpsest] construct the epistemological framework for a psychogeographic spatial analysis. Mental representations investigated within this framework would enable a designer to capture the invisible layers of the gap between ‘how a space is conceived’ and ‘how the same space is perceived and experienced.’ This gap is a neglected but critical issue in the planning discipline, and psychogeography provides methodological inputs to cover the interrelation between top-down designs of urban patterning and bottom-up reproductions of ‘the soul’ of urban space at the intersection of geography and psychology. City centers and university campuses exemplify opposite poles of spatial organization and walking practice, which may result in differentiated forms of spatial appropriation. Van has a traditional city center located at the core of the city, with a dense population and several activities, but it is not connected to Van Lake, the largest lake in the country. The university campus, on the other hand, is located at the periphery, and although it has a promenade along the lake’s coast and a regional hospital, it presents a limited walking experience with ambiguous forms of spatial appropriation. The city center sustains a vivid urban everyday life, whereas the campus presents a relatively natural life far away from the center. This paper aims to reveal the differentiated psychogeographic maps of spatial appropriation at the city center versus the university campus, which is located at the periphery of the city and along the coast of the largest lake in Turkey. The main question of the paper is: how do the psychogeographic maps of spatial appropriation differ between the city center and the university campus in Van within the walking experience, with reference to the two-fold map assumption? The experiential maps of a core group of 15 planning students will be created with the techniques of mental mapping, photographing, and narratives through attentive walks conducted together on selected routes; in addition to these attentive walks, 30 more in-depth interviews will be conducted by the core group. The narrative of psychogeographic mapping of spatial appropriation at the two spatial poles would display the conflicting soul of the city with reference to sub-behavioural regions of walking, differentiated forms of dérive and layers of palimpsest.

Keywords: attentive walk, body, cognitive geography, derive, experiential maps, psychogeography, Van, Turkey

Procedia PDF Downloads 80
39 Investigating the Neural Heterogeneity of Developmental Dyscalculia

Authors: Fengjuan Wang, Azilawati Jamaludin

Abstract:

Developmental Dyscalculia (DD) is defined as a specific learning difficulty involving continuous challenges in learning requisite math skills that cannot be explained by intellectual disability or educational deprivation. Recent studies have increasingly recognized that DD is a heterogeneous, rather than monolithic, learning disorder involving not only cognitive and behavioral deficits but also neural dysfunction. In recent years, neuroimaging studies have employed group comparisons to explore the neural underpinnings of DD, which contradicts the heterogeneous nature of DD and may obscure critical individual differences. This research aimed to investigate the neural heterogeneity of DD using case studies with functional near-infrared spectroscopy (fNIRS). A total of 54 children aged 6-7 years participated in this study, which comprised two comprehensive cognitive assessments, an 8-minute resting state, and an 8-minute one-digit addition task. Nine children met the criteria for DD and scored at or below 85 (i.e., the 16th percentile) on the Mathematics or Math Fluency subtest of the Wechsler Individual Achievement Test, Third Edition (WIAT-III) (both subtest scores were 90 or below). The remaining 45 children formed the typically developing (TD) group. Resting-state data and brain activation in the inferior frontal gyrus (IFG), superior frontal gyrus (SFG), and intraparietal sulcus (IPS) were collected for comparison between each case and the TD group. Graph theory was used to analyze the brain network under the resting state. This theory represents the brain network as a set of nodes (brain regions) and edges (pairwise interactions across areas) to reveal the architectural organization of the nervous network. Next, a single-case methodology developed by Crawford et al. in 2010 was used to compare each case’s brain network indicators and brain activation against the average data of the 45 TD children. Results showed that three out of the nine DD children displayed significant deviations from the TD children’s brain indicators. Case 1 had inefficient nodal network properties. Case 2 showed inefficient brain network properties and weaker activation in the IFG and IPS areas. Case 3 displayed inefficient brain network properties with no differences in activation patterns. Overall, the present study was able to distill differences in architectural organization and brain activation of DD vis-à-vis TD children using fNIRS and a single-case methodology. Although DD is regarded as a heterogeneous learning difficulty, it is noted that all three cases showed lower nodal efficiency in the brain network, which may be one of the neural sources of DD. Importantly, although the current “brain norm” established for the 45 children is tentative, the results from this study provide insights not only for future work on a “developmental brain norm” with reliable brain indicators but also for the viability of the single-case methodology, which could be used to detect differential brain indicators of DD children for early detection and intervention.
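
The two analytic steps mentioned above, nodal efficiency from graph theory and a single-case comparison against the TD group, can be sketched as follows. The connectivity matrix and group values are placeholders, and the frequentist single-case t-test shown is the Crawford and Howell (1998) variant rather than the specific 2010 procedure cited, so this is only an illustrative approximation of the analysis.

```python
# Illustrative sketch: (1) nodal efficiency of a binarised connectivity graph,
# (2) Crawford & Howell (1998) single-case t-test against the TD group.
# All data below are placeholders, not the study's fNIRS measurements.
import numpy as np
import networkx as nx
from scipy import stats

# (1) Toy functional connectivity matrix for 20 channels, thresholded at r > 0.3.
rng = np.random.default_rng(0)
conn = np.abs(np.corrcoef(rng.random((20, 240))))
np.fill_diagonal(conn, 0)
G = nx.from_numpy_array((conn > 0.3).astype(int))

n_nodes = G.number_of_nodes()
nodal_efficiency = {node: sum(nx.efficiency(G, node, other)
                              for other in G if other != node) / (n_nodes - 1)
                    for node in G}

# (2) Compare one case's mean nodal efficiency with the control (TD) sample.
def crawford_t(case_value, control_values):
    """Single-case t-test (Crawford & Howell, 1998)."""
    controls = np.asarray(control_values, dtype=float)
    n = controls.size
    t = (case_value - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    p = 2 * stats.t.sf(abs(t), df=n - 1)               # two-tailed p-value
    return t, p

case_value = float(np.mean(list(nodal_efficiency.values())))
td_values = rng.normal(0.45, 0.05, 45)                 # placeholder TD group values
print(crawford_t(case_value, td_values))
```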

Keywords: brain activation, brain network, case study, developmental dyscalculia, functional near-infrared spectroscopy, graph theory, neural heterogeneity

Procedia PDF Downloads 53
38 Conservation Detection Dogs to Protect Europe's Native Biodiversity from Invasive Species

Authors: Helga Heylen

Abstract:

With dogs saving wildlife in New Zealand since 1890, and governments in Africa, Australia and Canada trusting them to give the best results, Conservation Dogs Ireland wants to introduce more detection dogs to protect Europe's native wildlife. Conservation detection dogs are fast, portable and endlessly trainable. They are a cost-effective, highly sensitive and non-invasive way to detect protected and invasive species and wildlife disease. Conservation dogs find targets up to 40 times faster than any other method. They give results instantly, with near-perfect accuracy. They can search for multiple targets simultaneously, with no reduction in efficacy. The European Red List indicates that the decline in biodiversity has been most rapid in the past 50 years, and that the risk of extinction has never been higher. Two examples of major threats dogs are trained to tackle are: (I) Japanese Knotweed (Fallopia japonica), which is not only a serious threat to ecosystems, crops and structures like bridges and roads - it can wipe out the entire value of a house. The property industry and homeowners are only just waking up to the full extent of the nightmare. When those working in road construction move topsoil with a trace of Japanese Knotweed, it suffices to start a new colony. Japanese Knotweed grows up to 7 cm a day. It can stay dormant and resprout after 20 years. In the UK, the cost of removing Japanese Knotweed from the London Olympic site in 2012 was around £70m (€83m). UK banks already refuse to lend on a house that has Japanese Knotweed on site. Legally, landowners are now obliged to excavate Japanese Knotweed and have it removed to a landfill. More and more, we see Japanese Knotweed grow where a new house has been constructed and topsoil has been brought in. Conservation dogs are trained to detect small fragments of any part of the plant on sites and in topsoil. (II) Zebra mussels (Dreissena polymorpha), which are a threat to many waterways in the world. They colonize rivers, canals, docks, lakes, reservoirs, water pipes and cooling systems. They live up to 3 years and release up to one million eggs each year. Zebra mussels attach to surfaces like rocks, anchors, boat hulls, intake pipes and boat engines. They cause changes in nutrient cycles, reduction of plankton and increased plant growth around lake edges, leading to the decline of Europe's native mussel and fish populations. There is no solution, only costly measures to keep them at bay. With many interconnected networks of waterways, they have spread uncontrollably. Conservation detection dogs detect the zebra mussel from its early larval stage, which is still invisible to the human eye. Detection dogs are more thorough and cost-effective than any other conservation method, and will greatly complement and speed up the work of biologists, surveyors, developers, ecologists and researchers.

Keywords: native biodiversity, conservation detection dogs, invasive species, Japanese Knotweed, zebra mussel

Procedia PDF Downloads 197
37 Momentum Profits and Investor Behavior

Authors: Aditya Sharma

Abstract:

Profits earned from a relative strength strategy with a zero-cost portfolio, i.e., taking a long position in winner stocks and a short position in loser stocks from the recent past, are termed momentum profits. In recent times, there has been a lot of controversy and concern about the sources of momentum profits, since the existence of these profits acts as evidence of earning abnormal returns from publicly available information, directly contradicting the Efficient Market Hypothesis. A review of the literature reveals conflicting theories and differing evidence on the sources of momentum profits. This paper aims at re-examining the sources of momentum profits in Indian capital markets. The study focuses on assessing the effect of fundamental as well as behavioral sources in order to understand the role of investor behavior in stock returns and to suggest improvements (if any) to existing behavioral asset pricing models. This paper adopts the calendar-time methodology to calculate momentum profits for six different strategies, with and without skipping a month between the ranking and holding periods. Under this methodology, for each J/K strategy, at the beginning of each month t stocks are ranked on the past J months' average returns and sorted in descending order. Stocks in the upper decile are termed winners and those in the bottom decile losers. After ranking, long and short positions are taken in winner and loser stocks respectively, and both portfolios are held for the next K months, in such a manner that at any given point of time there are K overlapping long and short portfolios each, ranked from month t-1 to month t-K. At the end of the period, the returns of both the long and short portfolios are calculated by taking an equally weighted average across all months. The long-minus-short (LMS) returns are the momentum profits for each strategy. After testing for momentum profits, CAPM- and Fama-French three-factor-adjusted LMS returns are calculated to study the role that market risk plays in momentum profits. In the final phase of studying sources, a decomposition methodology has been used to break the profits into unconditional means, serial correlations, and cross-serial correlations. This methodology is unbiased, can be used with the decile-based approach, and helps to test the effect of behavioral and fundamental sources together. From the analysis, it was found that momentum profits do exist in Indian capital markets, with market risk playing little role in defining them. It was also observed that although momentum profits have multiple sources (risk, serial correlations, and cross-serial correlations), cross-serial correlations play a major role in defining these profits. The study revealed that momentum profits do have multiple sources; however, cross-serial correlations, i.e., the effect of the returns of other stocks, play a major role. This means that, in addition to studying investors' reactions to information about the same firm, it is also important to study how they react to information about other firms. The analysis confirms that investor behavior does play an important role in stock returns and that incorporating both aspects of investors' reactions into behavioral asset pricing models helps make them better.
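
A compact sketch of the calendar-time J/K construction described above is given below. The random returns, the J=6/K=6 choice and the absence of the optional skip month are illustrative assumptions rather than the paper's actual data or code.

```python
# Illustrative calendar-time J/K momentum construction (equal-weighted deciles).
# Random returns stand in for the actual Indian stock sample; no skip month.
import numpy as np
import pandas as pd

def momentum_lms(returns: pd.DataFrame, J: int = 6, K: int = 6) -> pd.Series:
    """Monthly long-minus-short (LMS) returns with K overlapping portfolios."""
    past = returns.rolling(J).mean()              # ranking variable: past J-month average return
    legs_by_month = {t: [] for t in returns.index}
    for pos, form_month in enumerate(returns.index):
        if pos < J - 1:
            continue                              # not enough history to rank yet
        ranks = past.loc[form_month].dropna()
        winners = ranks[ranks >= ranks.quantile(0.9)].index   # top decile
        losers = ranks[ranks <= ranks.quantile(0.1)].index    # bottom decile
        for h in range(1, K + 1):                 # hold for the next K months
            if pos + h >= len(returns.index):
                break
            hold_month = returns.index[pos + h]
            legs_by_month[hold_month].append(
                returns.loc[hold_month, winners].mean()
                - returns.loc[hold_month, losers].mean())
    return pd.Series({t: np.mean(v) for t, v in legs_by_month.items() if v})

# Toy example: 200 stocks, monthly returns over 2005-2015.
rng = np.random.default_rng(1)
months = pd.period_range("2005-01", "2015-12", freq="M")
rets = pd.DataFrame(rng.normal(0.01, 0.08, (len(months), 200)), index=months)
print(momentum_lms(rets, J=6, K=6).mean())        # average monthly LMS profit
```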

Keywords: investor behavior, momentum effect, sources of momentum, stock returns

Procedia PDF Downloads 305
36 Evaluation of Toxicity of Cerium Oxide on Zebrafish Developmental Stages

Authors: Roberta Pecoraro, Elena Maria Scalisi

Abstract:

Engineered nanoparticles (ENPs) and nanomaterials (ENMs) constitute an active research area and a sector in full expansion. They have physical-chemical characteristics and a small size that improve their performance compared to common materials. Due to the increase in their production and their subsequent release into the environment, new strategies are emerging to assess the risk of nanomaterials. NPs can be released into the environment through aquatic systems by human activities and exert toxicity on living organisms. We evaluated the potential toxic effect of cerium oxide (CeO2) nanoparticles because cerium oxide is used in many different fields owing to its distinctive properties. In order to assess nanoparticle toxicity, the Fish Embryo Toxicity (FET) test was performed. Powders of CeO2 NPs supplied by the CNR-IMM of Catania are indicated as CeO2 type 1 (as-prepared) and CeO2 type 2 (modified), while CeO2 type 3 (commercial) is supplied by Sigma-Aldrich. Starting from a stock solution (0.001 g in 10 ml of dilution water) of each type of CeO2 NPs, the other solutions were obtained by serial dilution, adding 1 ml of solution to 9 ml of dilution water, leading to three different concentrations (10^-4, 10^-5, 10^-6 g/ml). All the solutions were sonicated to counteract the natural tendency of NPs to aggregate and sediment. The FET test was performed according to the OECD guidelines for testing chemicals, using our internal protocol procedure. Eight selected fertilized eggs were placed in each beaker filled with 5 ml of each concentration of the three types of CeO2 NPs; control samples were incubated only in dilution water. Replication was performed for each concentration. During the exposure period, we observed four endpoints (embryo coagulation, lack of somite formation, failure to lift off the yolk sac, no heartbeat) under a stereomicroscope every 24 hours. Immunohistochemical analysis of treated larvae was performed to evaluate the expression of metallothioneins (MTs), heat shock protein 70 (HSP70) and 7-ethoxyresorufin-O-deethylase (EROD). Our results did not show evident alterations in embryonic development: all embryos completed development, and the hatching of the eggs, which started around the 48th hour after exposure, took place by the last observation at 72 hours. Good reactivity was found both in the embryos and in the newly hatched larvae. The presence of a heartbeat was also observed in embryos with reduced mobility, confirming their viability. A higher expression of the EROD biomarker was observed in the larvae exposed to the three types of CeO2, showing a clear difference from the control. A weak positivity was found for the MT biomarker in treated larvae as well as in the control. HSP70 was expressed homogeneously for all the types of nanoparticles tested, but not markedly more than in the control. Our results are in agreement with other studies in the literature, in which the exposure of Danio rerio larvae to other metal oxide nanoparticles did not show adverse effects on survival or hatching time. Further studies are necessary to clarify the role of these NPs and to resolve conflicting opinions.

Keywords: Danio rerio, endpoints, fish embryo toxicity test, metallic nanoparticles

Procedia PDF Downloads 134
35 Preparation, Characterization and Photocatalytic Activity of a New Noble Metal Modified TiO2@SrTiO3 and SrTiO3 Photocatalysts

Authors: Ewelina Grabowska, Martyna Marchelek

Abstract:

Among the various semiconductors, nanosized TiO2 has been widely studied due to its high photosensitivity, low cost, low toxicity, and good chemical and thermal stability. However, there are two main drawbacks to the practical application of pure TiO2 films. One is that TiO2 can be excited only by ultraviolet (UV) light due to its intrinsic wide bandgap (3.2 eV for anatase and 3.0 eV for rutile), which limits its practical efficiency for solar energy utilization, since UV light makes up only 4-5% of the solar spectrum. The other is that a high electron-hole recombination rate reduces the photoelectric conversion efficiency of TiO2. In order to overcome the above drawbacks and modify the electronic structure of TiO2, some semiconductors (e.g., CdS, ZnO, PbS, Cu2O, Bi2S3, and CdSe) have been used to prepare coupled TiO2 composites, improving their charge separation efficiency and extending the photoresponse into the visible region. It has been proved that the fabrication of p-n heterostructures by combining n-type TiO2 with p-type semiconductors is an effective way to improve the photoelectric conversion efficiency of TiO2. SrTiO3 is a good candidate for coupling with TiO2 and improving the photocatalytic performance of the photocatalyst because its conduction band edge is more negative than that of TiO2. Due to the potential differences between the band edges of these two semiconductors, the photogenerated electrons transfer from the conduction band of SrTiO3 to that of TiO2, while the photogenerated holes transfer in the opposite direction, from the valence band of TiO2 to that of SrTiO3. The photogenerated charge carriers can thus be efficiently separated, resulting in the enhancement of the photocatalytic properties of the photocatalyst. Additionally, one of the methods for improving photocatalyst performance is the addition of nanoparticles containing one or two noble metals (Pt, Au, Ag and Pd) deposited on the semiconductor surface. The proposed mechanisms are: (1) the surface plasmon resonance of noble metal particles is excited by visible light, facilitating the excitation of surface electrons and interfacial electron transfer; (2) some energy levels can be produced in the band gap of TiO2 by the dispersion of noble metal nanoparticles in the TiO2 matrix; (3) noble metal nanoparticles deposited on TiO2 act as electron traps, enhancing the electron-hole separation. In view of this, we recently obtained a series of TiO2@SrTiO3 and SrTiO3 photocatalysts loaded with noble metal NPs using the photodeposition method. The M-TiO2@SrTiO3 and M-SrTiO3 photocatalysts (M = Rh, Rt, Pt) were studied for the photodegradation of phenol in the aqueous phase under UV-Vis and visible irradiation. Moreover, in the second part of our research, hydroxyl radical formation was investigated. The fluorescence of an irradiated coumarin solution was used as a method of ˙OH radical detection. Coumarin readily reacts with the generated hydroxyl radicals, forming hydroxycoumarins. Although the major hydroxylation product is 5-hydroxycoumarin, only the 7-hydroxy product of coumarin hydroxylation emits fluorescent light. Thus, this method was used only for hydroxyl radical detection and not for determining the concentration of hydroxyl radicals.

Keywords: composites TiO2, SrTiO3, photocatalysis, phenol degradation

Procedia PDF Downloads 222
34 A Qualitative Anthropological Analysis of Competing Health Perceptions in Chagas-Related Consultations in Non-Endemic Geneva

Authors: Marina Gold, Yves Jackson, David Parrat

Abstract:

The high prevalence in Geneva of Latin American migrants from countries where Chagas disease is endemic (Bolivia, Brazil, Argentina, Colombia) is increasing the incidence of chronic Chagas-related problems, especially cardiovascular complications. The precarious migratory status of what are mostly undocumented migrants complicates access to healthcare and affects patients' and doctors' health perceptions regarding screening, treatment and monitoring of Chagas-related health concerns. This project results from a three-year collaboration between the Geneva University Hospital and the NGO Mundo Sano to understand the following questions: 1) How do Latin American migrants perceive their health? 2) What do they understand about Chagas disease? 3) Are patients' and doctors' health perceptions similar, or do they have competing agendas? This paper aims to present the results of a long-term study that interrogates health perceptions among Latin American migrants in Geneva. The first phase consisted in completing surveys at three community screening events (2016, 2017, 2018), and the results of these surveys reveal the subordination of the importance of health to that of having met economic family obligations. That is, health becomes important only when it is an impediment to economic gain. The contradictory result emerged that people are aware of the importance of preventive healthcare for ensuring long-term health, but they do not always have agency over their lifestyle habits (healthy food, regular exercise, emotional stability). The second phase of the research collected open-ended interviews with selected participants, in order to explore in more detail how Latin American migrants deal with Chagas in a socio-political and economic context different from that of endemic countries. These interviews (5 in total) reveal mixed methods of managing health: social networks, transnational access to healthcare (in Geneva, in Spain and back in their home country), and different valuations of health problems in each situation. The third phase consisted in observations of doctor-patient consultations and further extended interviews with patients to determine doctors' and patients' health perceptions around Chagas disease. This phase is ongoing, but it has yielded preliminary observations regarding the expectations that patients have of doctors and doctors' understanding of patients' complex situations. Positive and complementary health perceptions include patients feeling that doctors in Geneva are more understanding, more knowledgeable and less racist than those in their home country, who do not provide detailed information about Chagas or its treatment and discriminate against them for being indigenous or from poor rural areas; this enables better communication between doctors and patients. Possible conflicting health perceptions include patients addressing their health concerns more holistically and encountering the specialist's limitation of treating only one health concern, given time limitations and lack of competition with their colleagues (the general practitioner who referred the patient, for example). The implications of this study extend beyond the case of Chagas disease in Geneva and are relevant for all chronic conditions and migratory contexts of precarity.

Keywords: chagas disease, health perceptions, Latin American Migrants, non-endemic countries

Procedia PDF Downloads 120