Search results for: active power tuning
449 Geomechanics Properties of Tuzluca (Eastern Turkey) Bedded Rock Salt and Geotechnical Safety
Authors: Mehmet Salih Bayraktutan
Abstract:
Geomechanical properties of the rock salt deposits in the Tuzluca Salt Mine area (Eastern Turkey) were studied to model the operation and excavation strategy. The research focused on calculating the critical value of span height that will meet the safety requirements. The mine site, Tuzluca Hills, consists of alternating parallel beds of rock salt (NaCl) and gypsum (CaSO₄·2H₂O). The rock salt beds are more resistant than the narrow gypsum interlayers and form almost 97 percent of the total height of the hill. Therefore, the geotechnical safety of the galleries depends on the mechanical criteria of the rock salt cores. Deposition in the Tuzluca Basin was finally completed by the Tuzluca Evaporites, the uppermost stratigraphic unit. Mining operations are currently performed by classic mechanical excavation using the room-and-pillar method, and rooms and pillars are experiencing an initial stage of fracturing in places. The geotechnical safety of the whole mining area was evaluated by Rock Mass Rating (RMR), Rock Quality Designation (RQD), spacing of joints, and the interaction of groundwater with the fracture system. In general, bedded rock salt shows a large lateral deformation capacity while the deformation modulus stays relatively small (here E = 9.86 GPa). In such litho-stratigraphic environments, creep is a critical failure mechanism. The steady-state creep rate of rock salt is greater than that of the interbedded layers. Under long-lasting compressive stresses, creep may cause shear displacements, partly along bedding planes, and steady-state creep eventually transitions to an accelerated stage. Uniaxial compression creep tests on specimens were performed to estimate rock salt strength. On rock salt cores, average axial strength and strain were found to be 18 - 24 MPa and 0.43 - 0.45 %, respectively, with a uniaxial compressive strength of 26 - 32 MPa from bedded rock salt cores.
The elastic modulus is comparatively low, but the lateral deformation of the rock salt is high under the uniaxial compressive stress state: Poisson ratio = 0.44, break load = 156 kN, cohesion c = 12.8 kg/cm², specific gravity SG = 2.17 g/cm³. The fracture system (spacing of fractures, joints, faults, and offsets) was evaluated under the acting geodynamic mechanism. Two sand beds, each 4-6 m thick, exist near the upper level and at the top of the evaporite sequence. They act as aquifers and hold infiltrated water at the top for a long duration, which may result in the failure of roofs or pillars. Two major active seismic fault planes (striking N30W and N70E) and parallel fracture strands pose a moderate, seismically triggered risk of structural deformation of the bedded rock salt sequence. Earthquakes and floods are the two prevailing sources of geohazards in this region; the seismotectonic activity of the mine site is based on the crossing framework of the Kagizman and Igdir Faults. Dominant hazard risk sources include: a) weak mechanical properties (creep) of the rock salt, gypsum, and anhydrite beds; b) physical discontinuities cutting across the thick parallel layers of the evaporite mass; c) intercalated beds of weakly cemented or loose sand and clayey-sandy sediments. On the other hand, the parallel-bedded salt-gypsum deposits absorb and reduce seismic wave amplitudes within the rock mass. Keywords: bedded rock salt, creep, failure mechanism, geotechnical safety
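As a quick illustration of the reported elasticity values, the shear and bulk moduli implied by the measured E and Poisson ratio can be computed from the standard isotropic-elasticity relations (a sketch; the assumption of isotropy is ours, not the abstract's):

```python
# Isotropic elastic constants implied by the reported values:
# E = 9.86 GPa (deformation modulus), nu = 0.44 (Poisson ratio)
E_gpa = 9.86
nu = 0.44

# Standard relations for an isotropic solid
shear_modulus = E_gpa / (2 * (1 + nu))     # G = E / (2 (1 + nu))
bulk_modulus = E_gpa / (3 * (1 - 2 * nu))  # K = E / (3 (1 - 2 nu))

print(f"G = {shear_modulus:.2f} GPa")  # low shear stiffness
print(f"K = {bulk_modulus:.2f} GPa")   # nu near 0.5 -> nearly incompressible
```

The large K/G ratio (about 8) is consistent with the abstract's observation of large lateral deformation capacity at a modest deformation modulus.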
Procedia PDF Downloads 188
448 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour's tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of magnitude 7.5, 8.0, 8.5, and 9.0 Mw from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing a 9.0 Mw earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing.
The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides. Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact
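The role of arrival phase can be illustrated with a toy linear superposition of a sinusoidal tide and a decaying tsunami wave packet (purely illustrative, with amplitudes of our own choosing; the study itself used the ANUGA hydrodynamic model with dynamic tides, not linear addition):

```python
import numpy as np

# Toy model: semidiurnal tide (1.5 m range, so 0.75 m amplitude) plus a
# decaying tsunami wave train whose peak arrives at a chosen tidal hour.
tide_amplitude = 0.75   # m
tsunami_height = 1.0    # m, illustrative peak amplitude

def max_water_level(arrival_hour):
    """Peak total water level when the tsunami peak arrives at a given hour."""
    t = np.linspace(arrival_hour, arrival_hour + 2.0, 500)        # hours
    tide = tide_amplitude * np.cos(2 * np.pi * t / 12.42)         # M2 tide, high at t=0
    wave = tsunami_height * np.exp(-(t - arrival_hour)) \
           * np.cos(2 * np.pi * (t - arrival_hour) / 0.5)         # decaying wave train
    return float(np.max(tide + wave))

levels = {h: max_water_level(h) for h in range(13)}
worst = max(levels, key=levels.get)
print(worst, round(levels[worst], 2))  # arrival at high tide maximizes the level
```

In this linear toy the highest water level always occurs for arrival at high tide, which is why high-tide coincidence is the conservative modelling choice; the abstract's point is that current speeds do not follow such a simple pattern.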
Procedia PDF Downloads 238
447 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5 % of the permanent magnet market. Their typical composition of 29 - 32 % Nd, 64.2 - 68.5 % Fe and 1 - 1.2 % B contains a significant amount of rare-earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end-of-life products. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6 % Nd, 60.3 % Fe and 0.261 % B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then subjected to drying and roasting at 700 - 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures.
After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide. Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
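For orientation, the theoretical ceiling on Nd₂O₃ recovery from the stated waste composition follows from simple stoichiometry (a sketch assuming complete conversion of the 23.6 % Nd content; real process yields will be lower):

```python
# Stoichiometric ceiling for Nd2O3 recovery from 1 kg of waste with 23.6 % Nd
M_ND = 144.24  # g/mol, neodymium
M_O = 16.00    # g/mol, oxygen

m_nd2o3 = 2 * M_ND + 3 * M_O   # molar mass of Nd2O3 (two Nd per formula unit)
factor = m_nd2o3 / (2 * M_ND)  # grams of Nd2O3 per gram of Nd

nd_per_kg = 1000 * 0.236       # g of Nd contained in 1 kg of waste
print(round(factor, 3))        # ~1.166 g Nd2O3 per g Nd
print(round(nd_per_kg * factor, 1))  # ~275.3 g Nd2O3 per kg of waste
```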
Procedia PDF Downloads 181
446 A Proposal for an Excessivist Social Welfare Ordering
Authors: V. De Sandi
Abstract:
In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist". The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. To accomplish this, the identification of a richness or affluence line is necessary; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individuals' incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the class of rank-weighted social welfare functions: in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign rank preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER says that if two distributions are identical in every respect except for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their position relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we can characterize the class of eSWOs, obtaining the following result through a proof by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign rank preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold, if and only if it is an Excessivist social welfare ordering. A discussion about the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What the commonly implemented social welfare functions have overlooked is the concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap. Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering
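A minimal sketch of the idea of positive weights below θ and negative weights above (the equal positive weight and constant negative weight here are illustrative choices of ours; the paper's axiomatized weights are rank-dependent):

```python
def excessivist_welfare(incomes, theta, w_below=1.0, w_above=-0.5):
    """Toy excessivist evaluation: income at or below theta counts positively,
    income above theta counts negatively (excess is judged detrimental)."""
    ranked = sorted(incomes)  # rank-ordering, as in rank-weighted SWFs
    return sum((w_below if y <= theta else w_above) * y for y in ranked)

theta = 100.0
equal = [50.0, 60.0, 70.0]
with_excess = [50.0, 60.0, 500.0]  # one individual far above the threshold

# Aversion to excessive richness: the distribution containing an excessively
# rich individual is ranked strictly below the one without.
print(excessivist_welfare(equal, theta))        # 180.0
print(excessivist_welfare(with_excess, theta))  # -140.0
```

Consistent with the AER axiom, shrinking the rich individual's income from 500 to 200 raises the toy welfare value (from -140.0 to 10.0), mirroring the claim that reducing excessive income takes priority over avoiding waste above the threshold.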
Procedia PDF Downloads 61
445 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications
Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian
Abstract:
The growth of large food supply chains has demanded improved end-to-end traceability of food products, which has led to companies being increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. As the technology becomes widely used, several technological and economic issues must be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID is its sensitivity to environmental form factors, including the packaging material and the content of the packaging. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, the impact of food products on the reading range of RFID tags still needs further investigation; to the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and the properties of food beverages. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and of different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles filled with different concentrations of water-based solutions, including sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetrapak® were used as the packaging materials commonly used in the beverage industries. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined using an anechoic chamber. First, the best place to attach the tag on each packaging material was investigated using empty and water-filled bottles. Then, the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were measured at the fixed frequency of 915 MHz.
In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test the bottles filled with wine, orange juice, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance showed no correlation with the pH or conductivity of the solution. Additionally, the packaging material played a significant role in tag performance, and each tag tested performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for the prediction of tag performance based on the packaging material and its content. Further investigations, including more tags and food products, are needed to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties. Keywords: smart food packaging, supply chain management, food waste, radio frequency identification
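The "theoretical read range" of a passive UHF tag is commonly estimated from the Friis transmission equation; a sketch at the study's 915 MHz frequency, with illustrative textbook values for reader EIRP, tag gain, and chip sensitivity (these numbers are our assumptions, not the study's measurements):

```python
import math

# Friis-based theoretical read range of a passive UHF RFID tag
C = 3.0e8                 # m/s, speed of light
f = 915e6                 # Hz, fixed frequency used in the study
eirp = 4.0                # W, illustrative reader EIRP (FCC limit)
g_tag = 10 ** (2.0 / 10)  # tag antenna gain, assumed 2 dBi
p_threshold = 10 ** (-18.0 / 10) / 1000  # W, assumed -18 dBm chip sensitivity

wavelength = C / f
# Solve the Friis equation P_tag = EIRP * G_tag * (lambda / 4 pi r)^2 for r
read_range = (wavelength / (4 * math.pi)) * math.sqrt(eirp * g_tag / p_threshold)
print(f"{read_range:.1f} m")
```

Proximity to water-based contents detunes the tag antenna and degrades the power actually delivered to the chip, which is why the measured TRR depends on the packaging material and its contents.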
Procedia PDF Downloads 112
444 Gender-Transformative Education: A Pathway to Nourishing and Evolving Gender Equality in the Higher Education of Iran
Authors: Sepideh Mirzaee
Abstract:
Gender-transformative education (G-TE) is a challenging concept in the field of education and a matter of hot debate in the contemporary world. Paulo Freire, the prominent advocate of transformative education, considers it an alternative to the conventional banking model of education. Building on this, a more inclusive concept has been introduced, namely G-TE: an unbiased education fostering an environment of gender justice. As its main tenet, G-TE eliminates obstacles to education and promotes social shifts. A plethora of contemporary research indicates that G-TE could thoroughly revolutionize education systems by displacing inequalities and changing gender stereotypes. Despite significant progress in female education and its effects on gender equality in Iran, challenges persist, and deficiencies remain regarding gender disparities in society and in education specifically. For example, the number of women with university degrees is on the rise, and with it their demand for employment; yet many job opportunities remain occupied by men, and society considers it unacceptable to assign such occupations to women. In fact, Iran is regarded as a patriarchal society, where educational contexts can play a critical role in assigning gender ideology to learners, and such gender ideologies in education can become the prevailing ideologies of the entire society. Therefore, improving education in this regard can lead to a significant change in society, subsequently influencing the status of women not only within their own country but also on a global scale. Notably, higher education plays a vital role in this empowerment and social change: it can impart gender-neutral ideologies to its learners, bring about substantial change, and alleviate the detrimental effects of gender inequalities.
Therefore, this study aims to conceptualize the pivotal role of G-TE and its potential power in developing gender equality within the higher education system of Iran, presented within a theoretical framework. The study emphasizes the necessity of establishing a theoretical grounding for citizenship and transformative education while distinguishing gender-related issues, including gender equality, equity, and parity. This theoretical foundation will shed light on the decisions made by policy-makers, syllabus designers, material developers, and especially professors and students. By doing so, they will be able to promote and implement gender equality, recognizing the determinants, obstacles, and consequences of sustaining gender-transformative approaches in their classes within the Iranian higher education system. The expected outcomes include the eradication of gender inequality, the transformation of gender stereotypes, and the provision of equal opportunities for both males and females in education. Keywords: citizenship education, gender inequality, higher education, patriarchal society, transformative education
Procedia PDF Downloads 64
443 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study
Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo
Abstract:
5G technology is evolving to satisfy a variety of service requirements that may allow high data-rate connections (1 Gbps) and lower latency times than current networks (< 1 ms). In order to support a high data transmission speed and high traffic for eMBB (enhanced mobile broadband) use cases, 5G systems use different frequency bands of the radio spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus taking advantage of higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space; therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern over the past few months about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the Z-FET. We exposed developing zebrafish at a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the sample holder, ensuring an incident power density of 10 mW/cm². During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we assessed different morphological endpoints every 24 hours. The zebrafish embryo toxicity test (Z-FET) is a short-term test carried out on fertilized zebrafish eggs and represents an effective alternative to the acute test with adult fish (OECD, 2013). We observed that 5G exposure had no significant impact on mortality or morphology: exposed larvae showed normal detachment of the tail, presence of heartbeat, and well-organized somites, although the hatching rate was lower than that of untreated larvae even at 48 h of exposure.
Moreover, immunohistochemical analysis performed on the larvae showed negativity for HSP-70 expression, used as a biomarker. This is a preliminary study on the evaluation of potential toxicity induced by 5G, and further studies seem appropriate, aimed at clarifying the probable real risk of exposure to electromagnetic fields. Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test
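As a consistency check on the exposure setup, the far-field spreading relation S = EIRP / (4πr²) gives the equivalent isotropic radiated power needed to produce 10 mW/cm² at 15 cm (a sketch; at 27 GHz and 15 cm the sample may not be strictly in the horn's far field, so this is only an order-of-magnitude estimate):

```python
import math

# EIRP required for S = 10 mW/cm^2 at r = 15 cm, assuming far-field spreading
S = 10e-3   # W/cm^2, incident power density stated in the abstract
r = 15.0    # cm, horn-to-sample distance stated in the abstract

eirp = S * 4 * math.pi * r ** 2  # W, equivalent isotropic radiated power
print(f"{eirp:.1f} W")           # tens of watts EIRP, i.e. a high-gain horn
```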
Procedia PDF Downloads 128
442 Social Inequality and Inclusion Policies in India: Lessons Learned and the Way Forward
Authors: Usharani Rathinam
Abstract:
Although policies directing the inclusion of the marginalized were in effect, the majority of the chronically impoverished in India belonged to scheduled castes and scheduled tribes. Also, taking into account that poverty is gendered, destitute women belonged to the lower social order, and their needs are not largely highlighted at the policy level. This paper discusses the social relations view of poverty, which highlights how the social order that exists structurally in society can perpetuate chronic poverty, followed by a critical review of the social inclusion policies of India and their merits and demerits in addressing chronic poverty. A multiple case study design was utilized to address this concern in four districts of India: Jhansi, Tikamgarh, Cuddalore, and Anantapur. These four districts were selected by purposive sampling based on the criteria that the district should either be categorized as a backward district or have a history of high poverty rates. Qualitative methods, including eighty in-depth interviews, six focus group discussions, six social mapping procedures, and three key informant interviews, were conducted in 2011 at each of the locations. Analysis of the data revealed that, irrespective of gender, scheduled caste and scheduled tribe participants were found to be chronically poor in all districts. Caste-based discrimination is exhibited at both micro and macro levels: the village and institutional levels. At the village level, lower-caste respondents had less access to public resources. Within institutional settings, unequal access to resources due to confiscation was also noticed, especially in fund distribution: this study found that half of the budget intended for scheduled castes and scheduled tribes was confiscated by upper-caste administrative staff. This implies that power based on social hierarchy marginalizes lower-caste participants, preventing them from accessing better economic, social, and political benefits and leading them to suffer long-term poverty.
This study also explored the traditional ties between caste, social structure, and bonded labour as a cause of long-term poverty. Though equal access is emphasized in constitutional rights, issues at the micro level have not been reflected in the formulation of these rights. Therefore, it is significant for policy to consider this structural complexity and then focus on issues such as the equal distribution of assets and infrastructural facilities, which will reduce exclusion and foster long-term security in areas such as employment, markets, and public distribution. Keywords: caste, inclusion policies, India, social order
Procedia PDF Downloads 205
441 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes
Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil
Abstract:
The Raman tweezers technique has become prevalent in single-cell studies. This technique combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman tweezers have thus enabled researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, and probing silver nanoparticle-induced stress. Interest in the toxic effects of nanoparticles has grown along with their applications, and the interaction of nanoparticles with cells may vary with particle size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil-immersion objective with high numerical aperture (NA 1.3) was used to focus the laser beam into the sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. The concentration of the suspension was 0.02 mg/ml, so 0.03 mg of nanoparticles was present in the 0.1 ml obtained. 25 µl of RBCs were diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO₂ incubator.
Raman spectroscopic measurements were performed after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm⁻¹, 1211 cm⁻¹, 1224 cm⁻¹, 1371 cm⁻¹, and 1638 cm⁻¹. A decrease in intensity at 565 cm⁻¹, an increase at 1211 cm⁻¹ with a reduction at 1224 cm⁻¹, an increase in intensity at 1371 cm⁻¹, and the disappearance of the peak at 1635 cm⁻¹ indicate deoxygenation of hemoglobin. Nanoparticles of larger size showed the maximum spectral changes, with smaller changes observed in the spectra of erythrocytes treated with 10 nm nanoparticles. Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles
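For reference, a Raman shift in cm⁻¹ maps to an absolute Stokes wavelength via 1/λs = 1/λ0 − Δν̃; a small sketch for the 785 nm excitation and the bands discussed above:

```python
# Convert Raman shifts (cm^-1) to absolute Stokes wavelengths for 785 nm excitation
EXCITATION_NM = 785.0
shifts = [565, 1211, 1224, 1371, 1638]  # cm^-1, bands discussed in the abstract

def stokes_wavelength_nm(shift_cm1, excitation_nm=EXCITATION_NM):
    """1/lambda_s = 1/lambda_0 - shift, with everything handled in nm
    (1 cm^-1 = 1e-7 nm^-1)."""
    wavenumber_per_nm = 1.0 / excitation_nm - shift_cm1 * 1e-7
    return 1.0 / wavenumber_per_nm

for s in shifts:
    print(s, round(stokes_wavelength_nm(s), 1))  # ~821-901 nm
```

The listed bands thus fall in the near-infrared region around 821-901 nm, consistent with the NIR-optimized detection path (750 nm blazed grating, liquid-nitrogen-cooled CCD) described above.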
Procedia PDF Downloads 288
440 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: Woranittha Kritsarikan
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5 % of the permanent magnet market. Their typical composition of 29 - 32 % Nd, 64.2 - 68.5 % Fe and 1 - 1.2 % B contains a significant amount of rare-earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end-of-life products. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6 % Nd, 60.3 % Fe and 0.261 % B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 hours. The leachate was then subjected to drying and roasting at 700 - 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures.
After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide. Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
Procedia PDF Downloads 176
439 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of the increased utilization of health services is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care, such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that an excess of zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under the classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and Approximate Maximum Likelihood Estimation (AMLE) can be derived accordingly, which is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test follows an asymptotically standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error.
Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In real data analysis, whether or not measurement error in covariates is considered, both existing tests and our proposed test imply that H0 should be rejected with a P-value less than 0.001; i.e., the zero-inflation effect is very significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if it is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs in the ZIP regression model with and without measurement error. Conclusion: In our study, compared to the Poisson model, the ZIP model should be chosen when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account provide statistically more reliable and precise information.
Keywords: count data, measurement error, score test, zero inflation
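The zero-counts comparison in the abstract (206 observed zeros vs. ~156 expected under Poisson with mean 1.33) can be turned into a score test for zero inflation. The sketch below uses a van den Broek-style score statistic for an intercept-only Poisson model without measurement error (a simplification of the paper's setting); the sample size n = 590 is an assumption chosen so that the expected zero count matches the abstract's 156, not a figure from the study.

```python
import math

def score_test_zip_intercept(n, n_zero, ybar):
    """van den Broek-style score test for zero inflation in an
    intercept-only Poisson model (lambda-hat = sample mean).
    Returns the statistic, ~ chi^2(1) under H0 (no zero inflation)."""
    p0 = math.exp(-ybar)                # Poisson probability of a zero
    num = (n_zero / p0 - n) ** 2        # squared score for the inflation parameter
    den = n * (1 - p0) / p0 - n * ybar  # information-based variance term
    return num / den

# Abstract's figures: 206 observed zeros, mean 1.33; n = 590 is assumed
# so that n * exp(-1.33) ~ 156 expected zeros, as reported.
S = score_test_zip_intercept(n=590, n_zero=206, ybar=1.33)
print(round(S, 1))   # far above the 5% critical value 3.84 of chi^2(1)
```

With these inputs the statistic is roughly 42, consistent with the abstract's "P-value less than 0.001"; a sample with the expected number of zeros would give a statistic near zero.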
Procedia PDF Downloads 286
438 Inhibitory Effects of Crocin from Crocus sativus L. on Cell Proliferation of a Medulloblastoma Human Cell Line
Authors: Kyriaki Hatziagapiou, Eleni Kakouri, Konstantinos Bethanis, Alexandra Nikola, Eleni Koniari, Charalabos Kanakis, Elias Christoforides, George Lambrou, Petros Tarantilis
Abstract:
Medulloblastoma is a highly invasive tumour, as it tends to disseminate throughout the central nervous system early in its course. Despite the high 5-year survival rate, a significant number of patients demonstrate serious long- or short-term sequelae (e.g., myelosuppression, endocrine dysfunction, cardiotoxicity, neurological deficits, and cognitive impairment) and higher mortality rates, unrelated to the initial malignancy itself but rather to the aggressive treatment. A strong rationale exists for the use of Crocus sativus L. (saffron) and its bioactive constituents (crocin, crocetin, safranal) as pharmaceutical agents, as they exert significant health-promoting properties. Unlike most other carotenoids, crocins are highly water-soluble compounds with relatively low toxicity, as they are not stored in adipose and liver tissues. Crocins have attracted wide attention as promising anti-cancer agents, due to their antioxidant, anti-inflammatory, and immunomodulatory effects, their interference with transduction pathways implicated in tumorigenesis, angiogenesis, and metastasis (disruption of mitotic spindle assembly, inhibition of DNA topoisomerases, cell-cycle arrest, apoptosis, or cell differentiation), and their sensitization of cancer cells to radiotherapy and chemotherapy. The current research aimed to study the potential cytotoxic effect of crocins on the TE671 medulloblastoma cell line, which may be useful in the optimization of existing therapeutic strategies and the development of new ones. Crocins were extracted from stigmas of saffron in an ultrasonic bath, using petroleum ether, diethyl ether, and methanol 70% v/v as solvents, and the final extract was lyophilized. Crocins were identified by high-performance liquid chromatography (HPLC), comparing the UV-vis spectra and retention times (tR) of the peaks with literature data. For the biological assays, crocin was diluted in nuclease- and protease-free water.
TE671 cells were incubated with a range of concentrations of crocins (16, 8, 4, 2, 1, 0.5 and 0.25 mg/ml) for 24, 48, 72 and 96 hours. Cell viability after incubation with crocins was analyzed with the Alamar Blue viability assay. The active ingredient of Alamar Blue, resazurin, is a blue, nontoxic, cell-permeable compound that is virtually nonfluorescent. Upon entering cells, resazurin is reduced to a pink, fluorescent molecule, resorufin. Viable cells continuously convert resazurin to resorufin, generating a quantitative measure of viability. The colour of resorufin was quantified by measuring the absorbance of the solution at 600 nm with a spectrophotometer. HPLC analysis indicated that the most abundant crocins in our extract were trans-crocin-4 and trans-crocin-3. Crocins exerted significant cytotoxicity in a dose- and time-dependent manner (p < 0.005 for cells exposed to any concentration at 48, 72 and 96 hours versus unexposed cells); as their concentration and time of exposure increased, the reduction of resazurin to resorufin decreased, indicating a reduction in cell viability. IC50 values were calculated as ~3.738, 1.725, 0.878 and 0.7566 mg/ml at 24, 48, 72 and 96 hours, respectively. The results of our study could form the basis of research into the use of natural carotenoids as anticancer agents and the shift to targeted therapy with higher efficacy and limited toxicity. Acknowledgements: The research was funded by Fellowships of Excellence for Postgraduate Studies, IKY-Siemens Programme.
Keywords: crocetin, crocin, medulloblastoma, saffron
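An IC50 at each time point can be estimated from the viability-versus-dose data. The abstract does not report per-dose viability fractions, so the values below are hypothetical, chosen only to illustrate one common estimation approach: linear interpolation of viability against log-concentration between the two doses bracketing 50%.

```python
import math

def ic50_log_interp(concs, viability):
    """Estimate IC50 by linear interpolation of viability against
    log-concentration, between the two doses that bracket 50%."""
    pairs = sorted(zip(concs, viability))
    for (c1, v1), (c2, v2) in zip(pairs, pairs[1:]):
        if v1 >= 0.5 >= v2:                 # the 50% level is crossed here
            frac = (v1 - 0.5) / (v1 - v2)   # position within the interval
            log_c = math.log(c1) + frac * (math.log(c2) - math.log(c1))
            return math.exp(log_c)
    raise ValueError("viability never crosses 50%")

# Hypothetical 24 h viability fractions over the abstract's dose range (mg/ml)
concs = [0.25, 0.5, 1, 2, 4, 8, 16]
viab  = [0.95, 0.90, 0.80, 0.62, 0.47, 0.30, 0.15]
print(round(ic50_log_interp(concs, viab), 2))   # ~3.5 mg/ml with these made-up data
```

In practice a 4-parameter logistic fit over all doses is more robust than two-point interpolation, but the bracketing approach shows the idea with no fitting machinery.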
Procedia PDF Downloads 215
437 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cement Mortars
Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea
Abstract:
Alkali-activated materials are promising binders obtained by alkaline attack on fly ash, metakaolin, or blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry, and performance of this class of materials. Even though much research in recent years has focused on mix designs and curing conditions, the lack of exhaustive activation models and standardized mix designs and curing conditions, together with insufficient investigation of shrinkage behavior, efflorescence, additives, and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties, and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, durability in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO2/Na2O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature.
Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa, associated with shrinkage values of about 4200 microstrains, were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. They all showed good resistance in a 5 wt% H2SO4 solution, even after 60 days of immersion, while they showed a decrease in mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700 °C.
Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag
Procedia PDF Downloads 325
436 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (like trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both in the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging.
The investigated materials for the charging technologies and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
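The bottom-up accounting described above can be sketched as battery material (trucks x battery size x material intensity) plus infrastructure material (per charger or per km). All intensities and fleet numbers below are placeholder assumptions for illustration only, not values from the study, so the resulting quantities and ratios will not match the paper; only the accounting structure and the qualitative ordering (smaller eroad batteries offsetting extra roadside material) are shown.

```python
# Illustrative bottom-up copper tally for the three charging scenarios.
# TRUCKS, intensities, and infrastructure masses are all assumed values.
TRUCKS = 10_000
HIGHWAY_KM = 500
CU_PER_KWH = 1.0          # assumed kg of Cu per kWh of battery capacity

def copper_use(battery_kwh, infra_kg):
    """Total Cu (kg): fleet batteries plus charging infrastructure."""
    return TRUCKS * battery_kwh * CU_PER_KWH + infra_kg

scenarios = {
    # scenario: battery size per truck, infrastructure Cu (assumed)
    "plug-in":   copper_use(800, 2_000 * 50),          # 2000 chargers x 50 kg
    "catenary":  copper_use(200, HIGHWAY_KM * 2_000),  # ~2 t contact wire per km
    "inductive": copper_use(200, HIGHWAY_KM * 1_200),  # ~1.2 t of coils per km
}
for name, kg in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: {kg / 1e6:.1f} kt Cu")
```

With these made-up intensities the ordering matches the abstract (inductive lowest, plug-in highest), driven mainly by the 800 kWh vs. 200 kWh battery difference.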
Procedia PDF Downloads 50
435 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly used in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
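Frontal alpha asymmetry is typically computed as the difference of log alpha-band power between homologous right and left frontal electrodes (commonly F4 and F3); because alpha power is inversely related to cortical activity, a positive index reflects relatively greater left-frontal activity, the "positive response" pattern described above. The sketch below uses synthetic single-channel signals and a naive DFT band-power estimate; the electrode names, sampling rate, and amplitudes are illustrative assumptions.

```python
import math

FS = 256   # sampling rate (Hz); with N = FS the epoch is 1 s and bins are 1 Hz
N = 256

def band_power(signal, lo=8, hi=13):
    """Naive DFT power summed over the alpha band (8-13 Hz bins)."""
    total = 0.0
    for k in range(lo, hi + 1):
        re = sum(x * math.cos(2 * math.pi * k * n / N) for n, x in enumerate(signal))
        im = sum(x * math.sin(2 * math.pi * k * n / N) for n, x in enumerate(signal))
        total += (re * re + im * im) / N
    return total

# Synthetic 10 Hz alpha rhythm, larger amplitude on the right site ("F4")
left  = [1.0 * math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]
right = [2.0 * math.sin(2 * math.pi * 10 * n / FS) for n in range(N)]

faa = math.log(band_power(right)) - math.log(band_power(left))
print(f"FAA = {faa:.2f}")   # positive: more right alpha, i.e. greater left activity
```

Real pipelines would use Welch's method over many artifact-cleaned epochs rather than a single raw DFT, but the asymmetry index itself is this simple log-power difference.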
Procedia PDF Downloads 111
434 Men of Congress in Today’s Brazil: Ethnographic Notes on Neoliberal Masculinities in Support of Bolsonaro
Authors: Joao Vicente Pereira Fernandez
Abstract:
In the context of a democratic crisis, a new wave of authoritarianism has propelled domineering male figures into leadership posts worldwide. Although the gendered aspect of this phenomenon has been reasonably documented, recent studies have focused on high-level commanding posts, such as those of president and prime minister, leaving other positions of political power with limited attention. This natural focus of investigation, however powerful, seems to have restricted our understanding of the phenomenon by precluding a more thorough inquiry into its gendered aspects and its consequences for political representation as a whole. Trying to fill this gap, in recent research we examined the election results of Jair Bolsonaro's party for the Legislative Branch in 2018. We found that the party's proportion of non-male representatives was around the average, showing that it provided reasonable access of women to the legislature in comparative perspective. However, and perhaps more intuitively, we also found that the elected members of Bolsonaro's party performed very gendered roles, which allowed us to draw the first lines of the representative profiles gathered around the new right in Brazil. These results unveiled new horizons for further research, addressing topics that range from the role of women for the new right in Brazilian institutional politics to the relations between these profiles of representatives, their agendas, and their political and electoral strategies. This article aims to deepen the understanding of some of these profiles in order to lay the groundwork for the second research agenda mentioned above. More specifically, it focuses on two of the three profiles that were grasped predominantly, if not entirely, from masculine subjects in our previous research, with the objective of portraying the masculinity standards mobilized and promoted by them.
These profiles, the entrepreneur and the army man, were chosen to be developed due to their proximity to both liberal and authoritarian views and, moreover, because they may represent two facets of the new right that were integrated in a certain way around Bolsonaro in 2018 but that can be reworked in the future. After a brief introduction to the literature on masculinity and politics in times of democratic crisis, we succinctly present the relevant results of our previous research and then describe these two profiles and their masculinities in detail. We adopt a combination of ethnography and discourse analysis, methods that allow us to make sense of the data we collected in our previous research as well as of the data gathered for this article: social media posts and interactions between the elected members who inspired these profiles and their supporters. Finally, we discuss our results, presenting our main argument on how these descriptions provide a further understanding of the gendered aspect of liberal authoritarianism, from which to better apprehend its political implications in Brazil.
Keywords: Brazilian politics, gendered politics, masculinities, new-right
Procedia PDF Downloads 119
433 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts
Authors: Antonieta Heyden Megale
Abstract:
With the advent of globalization, the social, cultural, and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is promoted by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is then mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it must also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the most prestigious language. Therefore, firstly, Brazilian teachers who teach Portuguese in such contexts find themselves in a situation in which they teach in a low-status language. Secondly, because such teachers' actions are guided by a different cultural matrix, which differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, intends to analyze how they discursively represent their own professional identities and practices.
More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively rebuild their professional careers and explain the factors that led them to an international or to an immersion bilingual school; (ii) position themselves with respect to their linguistic repertoire; (iii) interpret the intercultural practices they are involved with in school; and (iv) position themselves by foregrounding categories to determine their membership in the group of Portuguese teachers. We have worked with these teachers' autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following prompt: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". These narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concept of positioning and indexical cues were taken into consideration in the data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.
Keywords: bilingual schools, identity, interculturality, narrative
Procedia PDF Downloads 336
432 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas during floods. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were nearly identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet.
The results show that the fine computational grid, a velocity-inlet condition at the flow inlet boundary, and a pressure-outlet condition at the boundaries in contact with the air provide the best possible results. The standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model shows the best agreement with the experimental results. As the jet gets closer to the end of the basin, the difference between the computational and experimental results increases. The mesh with 10602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results agree well with the experimental ones, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
Keywords: circular vertical, spillway, numerical model, boundary conditions
Procedia PDF Downloads 84
431 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision question that drives the total system cost is: how much unserved (or curtailed) energy is acceptable? Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. An approach to quantifying the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics are in the form of curves, with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper.
These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (probably a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.
Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
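The two indices above can be computed by a single pass over the minute-level curves with a simple battery state-of-charge model: loss-of-energy probability is the fraction of minutes with unserved demand, and expected unserved energy is the total shortfall. The sketch below uses toy one-day curves (a flat 0.5 kW load against 1.5 kW of midday solar) and an ideal lossless battery; the shapes, battery size, and initial state of charge are illustrative assumptions, not the paper's measured data.

```python
def loep_eue(production, consumption, battery_kwh):
    """One-pass battery dispatch over minute-level energy curves (kWh/min).
    Returns (loss-of-energy probability, expected unserved energy in kWh)."""
    soc = battery_kwh / 2               # assumed half-full at the start
    unserved_minutes, unserved_kwh = 0, 0.0
    for p, c in zip(production, consumption):
        soc += p - c                    # charge on surplus, discharge on deficit
        if soc > battery_kwh:           # battery full: surplus is curtailed
            soc = battery_kwh
        elif soc < 0:                   # battery empty: demand goes unserved
            unserved_minutes += 1
            unserved_kwh += -soc
            soc = 0.0
    return unserved_minutes / len(consumption), unserved_kwh

# Toy 24 h curves, one value per minute: 1.5 kW solar from 08:00-16:00,
# flat 0.5 kW load, 2 kWh battery.
prod = [1.5 / 60 if 8 * 60 <= m < 16 * 60 else 0.0 for m in range(1440)]
cons = [0.5 / 60] * 1440
loep, eue = loep_eue(prod, cons, battery_kwh=2.0)
print(f"LOEP = {loep:.3f}, EUE = {eue:.1f} kWh")   # ~0.42 and ~5 kWh here
```

Re-running this with an incrementally larger battery or production curve gives exactly the incremental benefit-cost comparison the paper describes: each increment's cost is weighed against its reduction in LOEP and EUE.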
Procedia PDF Downloads 233
430 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children's trust judgments. According to Mayer et al.'s model of trust, these characteristics, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1-β = 0.85, using G*Power 3.1. This study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral promises) and the fulfilment of promises (kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving keeping or breaking moral/immoral promises, in order to investigate children's trust judgments. Experiment 2 utilized single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust.
The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments toward promisors who kept moral promises than toward those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment regarding moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises. Promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
Procedia PDF Downloads 52
429 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer
Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang
Abstract:
In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD reactor. Defect-selective etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast at the edges of the IGSF. CL and LTPL spectra are measured to verify the IGSF's type. The CL spectrum shows maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV, and 2.410 eV, respectively. The exciton gap Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced by the off-cut and by the TSD on the surface are both suggested to be two C-Si bilayers in height.
The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. First, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, generating an IGSF. Second, the step-flow growth proceeds over the IGSF successively, forming an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for the 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide
Procedia PDF Downloads 315
428 Performance Analysis of Double Gate FinFET at Sub-10 nm Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects arise. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. The FinFET is one of the most promising structures. The double-gate 2D Fin field-effect transistor, in particular, suppresses short-channel effects (SCEs) and performs well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current, and current on-off ratio. In this paper, the device performance is analyzed at different structure parameters. The Id-Vg curve is a robust tool for understanding field-effect transistors and is central to transistor modeling, circuit design, performance optimization, and quality control of electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg curve and transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel as feature sizes decrease.
For every set of simulations, the device's characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; it is therefore crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximized at a channel width of 3 nm for a gate length of 10 nm, while the source and drain lengths have no significant effect on the current on-off ratio. The transconductance value plays a pivotal role in various electronic applications and should be considered carefully. This research also concludes that a transconductance of 340 S/m is achieved at a fin width of 3 nm and a gate length of 10 nm, and of 2380 S/m at a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
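The figures of merit discussed above can be read directly off an Id-Vg sweep. The following sketch is not part of the MuGFET tool; the curve shape, threshold voltage, and subthreshold slope are invented for illustration. It extracts the current on-off ratio and peak transconductance numerically:

```python
import numpy as np

def extract_figures_of_merit(vg, id_current):
    """Extract simple FinFET figures of merit from an Id-Vg sweep.

    vg: gate voltages (V), ascending; id_current: drain current (A).
    """
    ion = id_current[-1]                 # on-current at maximum Vg
    ioff = id_current[0]                 # off-current at minimum Vg
    on_off_ratio = ion / ioff
    gm = np.gradient(id_current, vg)     # transconductance dId/dVg (S)
    gm_max = gm.max()
    return on_off_ratio, gm_max

# Synthetic Id-Vg curve: exponential subthreshold region, then linear
# above threshold. Vt and the slope parameter are hypothetical values.
vg = np.linspace(0.0, 0.7, 71)
vt, ss = 0.3, 0.03
id_current = 1e-6 * np.where(vg < vt,
                             np.exp((vg - vt) / ss),
                             1.0 + (vg - vt) / ss)
ratio, gm_max = extract_figures_of_merit(vg, id_current)
print(f"on/off ratio ~ {ratio:.1e}, peak gm ~ {gm_max * 1e6:.1f} uS")
```

The same extraction applies unchanged to exported simulator data; only the two arrays need replacing.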
Procedia PDF Downloads 60
427 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require the location information of these devices to operate or to deliver better services to users. Although there are several ways of acquiring location data for mobile devices, the WLAN fingerprinting approach is considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measured as a function of the position of the mobile device. RSSI is a quantitative measure of the radio-frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe its location, such as longitude, latitude, floor, and building. The relationship between the received signal strengths (RSSs) of mobile devices and their corresponding locations is modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one way of modelling the problem, but the complexity of such an approach makes it unattractive. In contrast, we propose an intelligent system that can learn the mapping from RSSI measurements to the localization parameters to be predicted. The system is capable of improving its performance as more experiential knowledge is acquired.
The most appealing aspect of using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship between the supplied data (RSSI levels) and the localization parameters. The localization parameters to be predicted form two different tasks: the longitude and latitude of mobile devices are real-valued (a regression problem), while the floor and building are integer-valued or categorical (a classification problem). This research work presents artificial-neural-network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested on a separate database to measure their performance, reported as Mean Absolute Error (MAE) for the regression task and error rates for the classification tasks.
Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
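As a concrete illustration of fingerprint-based localization, the sketch below uses a simple k-nearest-neighbour matcher rather than the neural networks of this work: coordinates are regressed by averaging the nearest reference points, and the floor label is chosen by majority vote. All database values are invented.

```python
import numpy as np

def knn_localize(fingerprints, locations, floors, rssi_query, k=3):
    """Locate a device from its RSSI vector by k-nearest-neighbour matching.

    fingerprints: (n, m) RSSI training matrix; locations: (n, 2) lon/lat;
    floors: (n,) integer labels. Returns a (lon, lat) estimate and a floor.
    """
    d = np.linalg.norm(fingerprints - rssi_query, axis=1)   # signal-space distance
    nearest = np.argsort(d)[:k]
    lonlat = locations[nearest].mean(axis=0)                # regression: average coords
    floor = int(np.bincount(floors[nearest]).argmax())      # classification: majority vote
    return lonlat, floor

# Toy fingerprint database: 4 reference points, 3 access points.
fp = np.array([[-40, -70, -80], [-45, -65, -85], [-80, -50, -60], [-85, -45, -55]])
loc = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
flr = np.array([0, 0, 1, 1])
est, floor = knn_localize(fp, loc, flr, np.array([-42, -68, -82]), k=2)
print(est, floor)
```

The same regression/classification split carries over to the neural-network formulation; only the learned mapping differs.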
Procedia PDF Downloads 346
426 Violence against Women: A Study on the Aggressors' Profile
Authors: Giovana Privatte Maciera, Jair Izaías Kappann
Abstract:
Introduction: Violence against women is a complex phenomenon that accompanies a woman throughout her life and is the result of a social, cultural, political, and religious construction based on the differences between the genders. These differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimates the asymmetry of power. As a consequence of women's long historical and collective effort for legislation against the impunity of violence against women in the national scenery, a law known as Maria da Penha was enacted in 2006. The law was created as a protective measure for women who were victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed at the Police Station of Defense of the Woman of Assis city, with formal authorization from the courts, covering the period from 2013 to 2015. Content analysis and the theoretical framework of psychoanalysis are used to evaluate the results. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society and by the aggressor, who in most cases is a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, normally accompanied by psychological violence, which is the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and had only primary schooling. But, contrary to expectations, a minority of the aggressors were users of alcohol and/or drugs or had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence by the same aggressor before and the number of victims who had previously registered an occurrence.
The aggressors often use a discourse of denial in their testimony or try to justify their act as if the victim were to blame. Several interacting factors are believed to influence the aggressor to commit the abuse, including psychological, personal, and sociocultural factors. One hypothesis is that the aggressor has a history of violence in his family of origin. After the aggressor is tried, whether convicted or not, there is usually no rehabilitation plan or supervision that would enable him to change. Conclusions: This study notes the importance of examining the aggressor's characteristics and the reasons that led him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce the aggressions, as well as the creation of programs and actions that enable communication and understanding of the theme. This matters because recurrence is still high, since the punitive system is not enough and the law remains ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived in both victims and aggressors, as they almost always end up involved in disturbed and violent relationships characterized by a relation of subordination and dominance.
Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women
Procedia PDF Downloads 333
425 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives
Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes
Abstract:
The present article describes a regulatory impact analysis (RIA) of the efficiency of contracting for Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply electrical energy demand. An inefficient contracting of this amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. To promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, applies when the contracted demand exceeds the established regulatory limit; it covers consumer units, generators, and distribution companies. The second, IIOC, applies when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to keep agents from contracting less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of transmission system usage contracting, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators such as the efficient contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The ECI analysis demonstrated a decrease in contracting efficiency, a behaviour that had been occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for distributors and a small reduction for generators; moreover, the ODI notably decreased, which optimizes the usage of transmission installations. Hence, from the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, signalling to agents that their contracted values are not adequate to sustain service provision to their users. The IIOC is also relevant, insofar as it shows distributors that their contracted values are overestimated.
Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system
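The two incentives can be pictured as a simple settlement check on contracted versus verified demand. The sketch below is illustrative only: NR 666/2015 defines the actual regulatory limits, and the 5% tolerance and unit tariff here are hypothetical placeholders.

```python
def inefficiency_installments(contracted_mw, verified_mw, tolerance=0.05, tariff=1.0):
    """Illustrative check of the two NR 666/2015-style economic incentives.

    contracted_mw: contracted transmission system usage (MW);
    verified_mw: verified (measured) demand (MW).
    Returns (IIE, IIOC) charges; both are zero inside the tolerance band.
    """
    upper = contracted_mw * (1 + tolerance)   # regulatory ceiling (hypothetical)
    lower = contracted_mw * (1 - tolerance)   # over-contracting floor (hypothetical)
    iie = max(0.0, verified_mw - upper) * tariff    # demand exceeded the contract
    iioc = max(0.0, lower - verified_mw) * tariff   # contract exceeded the demand
    return iie, iioc

print(inefficiency_installments(100.0, 120.0))  # excess demand: IIE due
print(inefficiency_installments(100.0, 80.0))   # over-contracting: IIOC due
```

The point of the two-sided band is visible directly: an agent pays nothing only when its contracted value tracks its verified demand.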
Procedia PDF Downloads 118
424 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics
Authors: Mark Wiebe
Abstract:
One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas' metaphysical framework represents a fundamental shift. Gilson describes Thomas' metaphysics as a turn from a form of "essentialism" to "existentialism." It can be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers raise against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism, which weakens their force against Thomas' positions. In order to demonstrate these claims, it will be helpful to consider Thomas' metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking that brings their differences to the surface is how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, and between unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham.
Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. By contrast, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than of non-being. Consequently, each creature imitates and participates in God's perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God's simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors, as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas' thought.
Keywords: theology, philosophy of religion, metaphysics, philosophy
Procedia PDF Downloads 71
423 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers
Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal
Abstract:
Background: We aim to develop an integrated device comprising a single-probe EEG and CCD-based motion sensors for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (the spectral density of the beta wave) to assess attention during a given structured task (painting three segments of a circle using three different colors, namely red, green, and blue), the CCD sensor depicts the movement pattern of subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5-year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects is found to correlate with the relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separating variously colored balls from one table to another. We used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We also compared our scale with conventional clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation.
Conclusion: MAHD, the integrated device, is intended to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test
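The beta-wave spectral density used as the attention measure can be estimated with a plain periodogram. The sketch below is a generic band-power computation on a synthetic signal, not the MAHD software; the sampling rate, signal, and band edges are assumed for illustration.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Absolute power of `signal` in the [f_lo, f_hi] Hz band (periodogram)."""
    n = len(signal)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)   # one-sided PSD estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum() * (fs / n)                   # integrate over the band

fs = 256                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1.0 / fs)
# Synthetic "EEG": a strong 20 Hz (beta) component plus a weaker 3 Hz (delta) one.
eeg = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 3 * t)
beta_power = band_power(eeg, fs, 13.0, 30.0)
delta_power = band_power(eeg, fs, 0.5, 4.0)
print(beta_power > delta_power)
```

Comparing the two band powers over successive windows is one simple way such a device could contrast attention (beta-dominant) and relaxation (delta-dominant) epochs.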
Procedia PDF Downloads 97
422 Intelligent Control of Agricultural Farms, Gardens, Greenhouses, Livestock
Authors: Vahid Bairami Rad
Abstract:
Making agricultural fields intelligent allows the temperature, humidity, and other variables affecting the growth of agricultural products to be controlled online from a mobile phone or computer. Smartening agricultural fields and gardens is one of the best ways to optimize agricultural equipment and has a direct effect on the growth of plants, agricultural products, and farms. Smart farms, built on the Internet of Things and artificial intelligence, are the topic discussed here. Agriculture is becoming smarter every day. From large industrial operations to individuals growing organic produce locally, technology is at the forefront of reducing costs, improving results, and ensuring optimal delivery to market. A key element of smart agriculture is the use of useful data. Modern farmers have more tools to collect intelligent data than in previous years. Data on soil chemistry also allows people to make informed decisions about fertilizing farmland. Moisture sensors and accurate irrigation controllers have allowed irrigation processes to be optimized while reducing the cost of water consumption. Drones can apply pesticides at precisely the desired point. Automated harvesting machines navigate crop fields based on position and capacity sensors. The list goes on. Almost any process related to agriculture can use sensors that collect data to optimize existing processes and support informed decisions. The Internet of Things (IoT) is at the center of this great transformation. IoT hardware has grown and developed rapidly to provide low-cost sensors for people's needs. These sensors are embedded in battery-powered IoT devices that can operate for years with access to low-power, cost-effective mobile networks. IoT device-management platforms have also evolved rapidly and can now securely manage existing devices at scale.
IoT cloud services also provide a set of application-enablement services that developers can easily use, allowing them to focus on building application business logic. These developments have created powerful new applications in the field of the Internet of Things, and these applications can be used in various industries, such as agriculture, to build smart farms. But the question is, what makes today's farms truly smart farms? Let us put this question another way: when will the technologies associated with smart farms reach the point where the range of intelligence they provide exceeds that of experienced and professional farmers?
Keywords: food security, IoT automation, wireless communication, hybrid lifestyle, Arduino Uno
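As a minimal example of the sensor-driven control logic such a smart-farm node might run, the sketch below implements a hypothetical hysteresis-based irrigation valve; the moisture thresholds are invented, and a real device would read values from an actual soil sensor rather than a list.

```python
from dataclasses import dataclass

@dataclass
class IrrigationController:
    """Minimal threshold-based irrigation logic of the kind a smart-farm
    IoT node might run; the thresholds here are hypothetical."""
    moisture_on: float = 30.0    # start irrigating below this soil moisture (%)
    moisture_off: float = 55.0   # stop irrigating above this soil moisture (%)
    valve_open: bool = False

    def update(self, moisture_pct: float) -> bool:
        # Hysteresis keeps the valve from chattering around a single threshold.
        if moisture_pct < self.moisture_on:
            self.valve_open = True
        elif moisture_pct > self.moisture_off:
            self.valve_open = False
        return self.valve_open

ctrl = IrrigationController()
readings = [60, 40, 25, 35, 58]              # simulated sensor readings (%)
states = [ctrl.update(m) for m in readings]
print(states)   # -> [False, False, True, True, False]
```

Between the two thresholds the valve simply holds its last state, which is the usual way to avoid rapid on/off cycling on a noisy moisture signal.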
Procedia PDF Downloads 55
421 Nuclear Near Misses and Their Learning for Healthcare
Authors: Nick Woodier, Iain Moppett
Abstract:
Background: It is estimated that one in ten patients admitted to hospital will suffer an adverse event in their care. While the majority of these will result in low harm, patients are being significantly harmed by the processes meant to help them. Healthcare, therefore, seeks to make improvements in patient safety by taking learning from other industries that are perceived to be more mature in their management of safety events. Of particular interest to healthcare are 'near misses': those events that almost happened, but for an intervention. Healthcare has no guidance on how best to manage and learn from near misses to reduce the chances of harm to patients. The authors, as part of a larger study of near-miss management in healthcare, sought to learn from the UK nuclear sector to develop principles for how healthcare can identify, report, and learn from near misses to improve patient safety. The nuclear sector was chosen as an exemplar due to its status as an ultra-safe industry. Methods: A Grounded Theory (GT) methodology, augmented by a scoping review, was used. Data collection included interviews, scenario discussion, field notes, and the literature. The review protocol is accessible online. The GT aimed to develop theories about how the nuclear sector manages near misses, with a focus on defining them and clarifying how best to support reporting and analysis to extract learning. Near misses related to radiation release or exposure were the focus. Results: Eight nuclear interviews contributed to the GT, spanning nuclear power, decommissioning, weapons, and propulsion. The scoping review identified 83 articles across a range of safety-critical industries, with only six focused on nuclear. The GT identified that the nuclear sector has a particular focus on precursors and low-level events, with regulation supporting their management.
Exploration of definitions led to recognition of the importance of several interventions in a sequence of events, interventions that do not rely solely on humans, since humans cannot be assumed to be robust barriers. Regarding reporting and analysis, no consistent methods were identified; for learning, however, the role of operating experience learning groups was identified as an exemplar. The safety culture across the nuclear sector, though, was heard to vary, which undermined the reporting of near misses and other safety events. Some parts of the industry described their focus on near misses as new and said that, despite potential risks, progress to mitigate hazards is slow. Conclusions: Healthcare often sees 'nuclear', like other ultra-safe industries such as 'aviation', as homogenous. However, the findings here suggest significant differences in safety culture and maturity across various parts of the nuclear sector. Healthcare can take learning from some aspects of near-miss management in the nuclear sector, such as how near misses are defined and how learning is shared through operating experience networks. However, healthcare also needs to recognise that variability exists across industries and that, comparably, it may be more mature in some areas of safety.
Keywords: culture, definitions, near miss, nuclear safety, patient safety
Procedia PDF Downloads 103
420 Graphic Narratives: Representations of Refugeehood in the Form of Illustration
Authors: Pauline Blanchet
Abstract:
In a world where images are a prominent part of our daily lives and a way of absorbing information, the analysis of how migration narratives are represented is vital. This thesis raises questions concerning the power of illustrations, drawings, and visual culture to represent migration narratives in the age of Instagram. The rise of graphic novels and comics has come about in the last fifteen years, specifically through contemporary authors engaging with complex social issues such as migration and refugeehood. As a result, refugee subjects often appear in these narratives, whether as autobiographical stories or with the subject included in the creative process. Growth in discourse around migration has been present in other art forms: in 2018 there were dedicated exhibitions around migration, such as Tania Bruguera at the TATE (2018-2019) and 'Journeys Drawn' at the House of Illustration (2018-2019), as well as dedicated film festivals (2018: the Migration Film Festival), showing the recent interest in using the arts as a medium of expression for themes of refugeehood and migration. Graphic visuals are fast becoming a key instrument for representing migration, and the central thesis of this paper is to show the strengths and limitations of this form, as well as the methodology used by the actors in the production process. Recent works released in the last ten years have not been analysed in the same context as earlier graphic novels such as Palestine and Persepolis. While much research has been done on mass-media portrayals of refugees in photography and journalism, there is a lack of literature on representation through illustration. There is also little research on the accessibility of graphic novels, such as where they can be found and what the intentions behind writing them are.
It is interesting to see why these authors, NGOs, and curators have decided to highlight these migrant narratives at a time when the mainstream media has covered the 'refugee crisis' extensively. Using primary data from one-on-one interviews with artists, curators, and NGOs, this paper investigates the effectiveness of graphic novels for depicting refugee stories as a viable alternative to other mass-media forms. The paper is divided into two distinct sections. The first is concerned with the form of the comic itself and how it limits or strengthens the representation of migrant narratives. This involves analysing the layered and complex forms that comics allow, such as multimedia pieces, the use of photography, and forms of symbolism. It also shows how illustration allows for the anonymity of refugees, the empathetic aspect of the form, and how the history of the graphic-novel form has made space for positive representations of women in the last decade. The second section analyses the creative and methodological process undertaken by the actors and their involvement in the production of the works.
Keywords: graphic novel, refugee, communication, media, migration
Procedia PDF Downloads 112