Search results for: power source
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10288

478 Nuclear Materials and Nuclear Security in India: A Brief Overview

Authors: Debalina Ghoshal

Abstract:

Nuclear security is the ‘prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.’ Ever since the end of the Cold War, nuclear materials security has remained a concern for global security, and with the increase in terrorist attacks, not least in India, the security of nuclear materials remains a priority. India has therefore made continued efforts to tighten security around nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is distinct from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve this goal: indigenous Pressurised Heavy Water Reactors, now standardised at 700 MW per reactor, Light Water Reactors, and indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts to advance non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, India has ratified the IAEA Additional Protocol in order to enhance the transparency of its nuclear material and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards on nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: governance, nuclear security practice and culture, institutions, technology, and international cooperation. However, there is still scope for further improvement in nuclear materials and nuclear security. The NTI Report notes: ‘India’s improvement reflects its first contribution to the IAEA Nuclear Security Fund etc. In the future, India’s nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India’s nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.’ This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to strengthen it further.

Keywords: India, nuclear security, nuclear materials, non-proliferation

Procedia PDF Downloads 351
477 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages

Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall

Abstract:

Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 used in the modelling for this study are close to the full tidal range. The modelled tsunami wave trains include hypothetical tsunami generated by earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5, and 9.0 at the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled with the peak wave coinciding with both a low tide and a high tide. A single wave train, representing an Mw 9.0 earthquake at the Puysegur trench, is modelled with peak waves coinciding with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation, and current speed. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when the peak coincides with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
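
As an illustration of the dynamic-tide approach described above, the following is a minimal sketch of an ANUGA run in which a tsunami pulse is superimposed on a sinusoidal tide at the open boundary, assuming ANUGA's standard Python API (rectangular_cross_domain, Time_boundary, evolve). The toy channel geometry, wave amplitudes, and tide parameters are illustrative only and do not reproduce the study's Sydney Harbour setup.

```python
import math
import anuga

# Toy sloping channel standing in for real harbour bathymetry (illustrative only).
domain = anuga.rectangular_cross_domain(40, 10, len1=20000.0, len2=2000.0)
domain.set_quantity('elevation', lambda x, y: -20.0 + 18.0 * x / 20000.0)
domain.set_quantity('friction', 0.025)   # Manning's n
domain.set_quantity('stage', 0.0)        # start at mean sea level

TIDE_AMP = 0.75                 # ~1.5 m tidal range, as in the study
TIDE_PERIOD = 12.4 * 3600.0     # semi-diurnal tide, s
TSU_AMP, TSU_ARRIVAL, TSU_WIDTH = 1.0, 2.0 * 3600.0, 600.0  # Gaussian pulse, s

def offshore_stage(t):
    """Dynamic tide plus a tsunami pulse at the open (offshore) boundary."""
    tide = TIDE_AMP * math.sin(2.0 * math.pi * t / TIDE_PERIOD)
    tsunami = TSU_AMP * math.exp(-((t - TSU_ARRIVAL) / TSU_WIDTH) ** 2)
    return [tide + tsunami, 0.0, 0.0]    # [stage, x-momentum, y-momentum]

Bo = anuga.Time_boundary(domain=domain, function=offshore_stage)
Br = anuga.Reflective_boundary(domain)
domain.set_boundary({'left': Bo, 'right': Br, 'top': Br, 'bottom': Br})

max_stage = float('-inf')
for t in domain.evolve(yieldstep=60.0, finaltime=4.0 * 3600.0):
    max_stage = max(max_stage, domain.get_quantity('stage').get_maximum_value())
print(f'Maximum stage over the run: {max_stage:.2f} m')
```

Shifting TSU_ARRIVAL relative to the tidal phase reproduces, in miniature, the study's design of landing the peak wave on every hour of a 12-hour tidal cycle.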

Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact

Procedia PDF Downloads 239
476 Research Cooperation between Poland and Ukraine in Terms of Food Chain Safety Control in the Frame of the MICRORISK Project

Authors: Kinga Wieczorek, Elzbieta Kukier, Remigiusz Pomykala, Beata Lachtara, Renata Szewczyk, Krzysztof Kwiatek, Jacek Osek

Abstract:

The MICRORISK project (Research cooperation in assessment of microbiological hazard and risk in the food chain) was funded by the European Commission under the FP7 PEOPLE 2012 IRSES call within the International Research Staff Exchange Scheme of the Marie Curie Actions and carried out from 2014 to 2015. The main aim of the project was to establish cooperation between the European Union (EU) and a third country in an area important from the public health point of view. The following organizations were engaged in the activity: the National Veterinary Research Institute (NVRI) in Pulawy, Poland (coordinator); the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) in Maisons-Alfort, France; the National Scientific Center Institute of Experimental and Clinical Veterinary Medicine (NSC IECVM), Kharkov, Ukraine; and the State Scientific and Research Institute of Laboratory Diagnostics and Veterinary and Sanitary Expertise (SSRILDVSE), Kiev, Ukraine. The results of the project showed that Ukraine applies microbiological criteria in accordance with Commission Regulation (EC) No 2073/2005 of 15 November 2005 on microbiological criteria for foodstuffs. Compliance concerns both the food safety criteria applicable at the retail stage and the process hygiene criteria in food production. Ukrainian legislation also provides for criteria that have no counterparts in EU food law and are based on provisions of Ukrainian law. Partial coherence of the Ukrainian and EU legal requirements on microbiological criteria for food and feed concerns parameters such as total plate count, coliforms, and coagulase-positive Staphylococcus spp., including S. aureus. Analysis of the laboratory methods used for microbiological hazard control in the food production chain showed that most methods used in the EU are well known to the Ukrainian partners, and many of them are routinely applied either as the only standards in laboratory practice or alongside Ukrainian methods. The area without any legislation, where EU regulations and analytical methods should be implemented, is the detection of Shiga toxin-producing E. coli, including E. coli O157, and staphylococcal enterotoxins. During the project, an analysis of the existing Ukrainian and EU data on the prevalence of the most important food-borne pathogens at different stages of the food production chain was performed. In particular, the prevalence of Salmonella spp., Campylobacter spp., L. monocytogenes, and clostridia was examined. The analysis showed that poultry meat still appears to be the most important food-borne source of Campylobacter and Salmonella in the EU. On the other hand, L. monocytogenes was seldom detected above the legal safety limit (100 cfu/g) in EU countries. Moreover, the analysis revealed a lack of comprehensive data on the prevalence of the most important food-borne pathogens in Ukraine. The results of the MICRORISK project and the networking activities among the research organizations participating in its tasks will foster better mutual understanding in areas of great public health importance, such as microbiological hazards in the food production chain, and will ultimately help improve food quality and safety for consumers.

Keywords: cooperation, European Union, food chain safety, food law, microbiological risk, Microrisk, Poland, Ukraine

Procedia PDF Downloads 374
475 Recycling of Sintered NdFeB Magnet Waste via Oxidative Roasting and Selective Leaching

Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa

Abstract:

Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29-32% Nd, 64.2-68.5% Fe, and 1-1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end-of-life products. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe, and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process combining oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then subjected to drying and roasting at 700-800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄). Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks became more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating more effective roasting. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally achieve neodymium oxide.
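
To make the recovery target concrete, the sketch below estimates the stoichiometric ceiling of Nd₂O₃ recoverable per kilogram of this waste from its 23.6% Nd assay; the overall recovery fractions in the loop are hypothetical illustrations, not figures from the study.

```python
# Molar masses, g/mol
M_ND, M_O = 144.242, 15.999
M_ND2O3 = 2 * M_ND + 3 * M_O          # ~362.5 g/mol

waste_mass_g = 1000.0                  # basis: 1 kg of sintered NdFeB scrap
nd_fraction = 0.236                    # 23.6 % Nd, per the waste assay above

nd_mass_g = waste_mass_g * nd_fraction
nd2o3_ceiling_g = nd_mass_g * M_ND2O3 / (2 * M_ND)   # stoichiometric maximum

print(f'Stoichiometric ceiling: {nd2o3_ceiling_g:.0f} g Nd2O3 per kg waste')
for recovery in (0.70, 0.80, 0.90):    # hypothetical overall process recoveries
    print(f'{recovery:.0%} recovery -> {nd2o3_ceiling_g * recovery:.0f} g Nd2O3')
```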

Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching

Procedia PDF Downloads 181
474 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications

Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian

Abstract:

The growth of large food supply chains has demanded improved end-to-end traceability of food products, which has led companies to become increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. As the technology is adopted more widely, several technological and economic issues must be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID is its sensitivity to environmental factors, including the packaging material and the contents of the package. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, the impact of food products on the read range of RFID tags needs further investigation. To the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and the properties of food and beverages. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and of different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles, which were filled with different concentrations of water-based solutions, including sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetra Pak® were used as packaging materials commonly employed in the beverage industry. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined using an anechoic chamber. First, the best place to attach the tag on each packaging material was investigated using empty and water-filled bottles. Then, the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were measured at a fixed frequency of 915 MHz. In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test bottles filled with wine, orange juice, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance showed no correlation with the pH or conductivity of the solution. Additionally, the packaging material played a significant role in tag performance, and each tag performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for predicting tag performance based on the packaging material and its contents. Further investigations, including more tags and food products, are needed to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties.
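
For context on how a theoretical read range relates to the forward-link power budget, here is a minimal sketch based on the standard Friis free-space relation at the study's 915 MHz test frequency; the EIRP, tag antenna gain, chip sensitivity, and the lumped loss factor for packaging and contents are all assumed values, not measurements from this work.

```python
import math

C = 299_792_458.0                  # speed of light, m/s
LAM = C / 915e6                    # wavelength at the 915 MHz test frequency, m

def theoretical_read_range(eirp_w, tag_gain, chip_sens_w, loss=1.0):
    """Forward-link free-space range from the Friis equation.
    `loss` in (0, 1] lumps detuning/absorption by packaging and contents."""
    return (LAM / (4.0 * math.pi)) * math.sqrt(eirp_w * tag_gain * loss / chip_sens_w)

eirp = 4.0                          # 4 W EIRP (typical regulatory limit; assumed)
gain = 1.64                         # dipole-like tag antenna gain (assumed)
sens = 10 ** (-18.0 / 10.0) / 1000  # -18 dBm chip sensitivity -> W (assumed)

for label, loss in [('free air', 1.0), ('PET bottle, water-based fill', 0.25)]:
    print(f'{label}: TRR ~ {theoretical_read_range(eirp, gain, sens, loss):.1f} m')
```

The loss factor is where packaging material and contents enter the link budget, which is why the same tag can rank differently on glass, PET, and Tetra Pak®.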

Keywords: smart food packaging, supply chain management, food waste, radio frequency identification

Procedia PDF Downloads 112
473 The Strategic Role of Accommodation Providers in Encouraging Travelers to Adopt Environmentally-Friendly Modes of Transportation: An Experiment from France

Authors: Luc Beal

Abstract:

Introduction. Among the stakeholders involved in the tourist decision-making process, the accommodation provider has the potential to play a crucial role in raising awareness, disseminating information, and thus influencing the tourists’ choice of transportation. Since the early days of tourism, the accommodation provider has consistently served as the primary point of contact with the destination and, consequently, as the primary source of information for visitors. By offering accommodation and hospitality, the accommodation provider has evolved into a trusted third party, functioning as an 'ambassador' capable of recommending the finest attractions and activities available at the destination. In contemporary times, when tourists plan their trips, they make a series of consecutive decisions, the most important of which is to lock in the accommodation reservation for the earliest days, so as to secure a safe arrival. Consequently, tourists place their trust in the accommodation provider not only for lodging but also for recommendations regarding restaurants, activities, and more. The latter thus has the opportunity to inform and influence tourists well in advance of their arrival, particularly during the booking phase, namely when it comes to selecting their mode of transportation. The pressing need to reduce greenhouse gas emissions within the tourism sector presents an opportunity to underscore the influence that accommodation providers have historically exerted on tourist decision-making. Methodology. A participatory research project, currently ongoing in south-western France in collaboration with a nationwide hotel group and several destination management organizations, examines the factors that determine the ability of accommodation providers to influence tourist transportation choices. Additionally, the research seeks to identify the conditions that motivate accommodation providers to assume a proactive role, such as fostering customer loyalty, reduced distribution costs, and financial compensation mechanisms. A panel of hotels participated in a series of focus group sessions with tourists, with the objective of modeling the decision-making process of tourists regarding their choice of transportation mode and of identifying and quantifying the types and levels of incentives likely to encourage environmentally responsible choices. Individual interviews were also conducted with hotel staff, including receptionists and guest relations officers, to develop a framework for interactions with tourists during crucial decision-making moments related to transportation choices. The primary finding of this research is that financial incentives significantly outweigh symbolic incentives in motivating tourists to opt for eco-friendly modes of transportation. Another noteworthy result underscores the crucial impact of the organizational conditions governing interactions with tourists both before and during their stay. These conditions greatly influence the ability to raise awareness at key decision-making moments and the possibility of gathering data about the chosen transportation mode during the stay. In conclusion, this research has led to the formulation of practical recommendations for accommodation providers and Destination Marketing Organizations (DMOs). These recommendations pertain to communication protocols with tourists, the collection of evidence confirming chosen transportation modes, and the implementation of the necessary incentives. Through these measures, the accommodation provider can assume a central role in guiding tourists towards making responsible transportation choices.

Keywords: accommodation provider, trusted third party, environmentally-friendly transportation, greenhouse gas, tourist decision-making process

Procedia PDF Downloads 57
472 Gender-Transformative Education: A Pathway to Nourishing and Evolving Gender Equality in the Higher Education of Iran

Authors: Sepideh Mirzaee

Abstract:

Gender-transformative education (G-TE) is a challenging concept in the field of education and a matter of hot debate in the contemporary world. Paulo Freire, the prominent advocate of transformative education, considered it an alternative to the conventional banking model of education. Building on this, a more inclusive concept, G-TE, has been introduced as an unbiased form of education fostering an environment of gender justice. As its main tenet, G-TE eliminates obstacles to education and promotes social change. A plethora of contemporary research indicates that G-TE could completely revolutionize education systems by displacing inequalities and changing gender stereotypes. Despite significant progress in female education and its effects on gender equality in Iran, challenges persist, and deficiencies regarding gender disparities remain in society and in education specifically. As an example, the number of women with university degrees is on the rise, and their demand for employment in society is therefore increasing; yet many job opportunities remain occupied by men, and society finds it hard to tolerate assigning such occupations to women. In fact, Iran is regarded as a patriarchal society where educational contexts can play a critical role in assigning gender ideologies to learners. Gender ideologies transmitted through education can thus become the prevailing ideologies in the entire society. Improving education in this regard can therefore lead to significant change in society, subsequently influencing the status of women not only within their own country but also on a global scale. Notably, higher education plays a vital role in this empowerment and social change. In particular, higher education can have a crucial part in imparting gender-neutral ideologies to learners and bringing about substantial change, and it has the potential to alleviate the detrimental effects of gender inequalities. Therefore, this study aims to conceptualize the pivotal role of G-TE and its potential power in developing gender equality within the higher education system of Iran, presented within a theoretical framework. The study emphasizes the necessity of establishing a theoretical grounding for citizenship and transformative education while distinguishing gender-related issues, including gender equality, equity, and parity. This theoretical foundation will shed light on the decisions made by policy-makers, syllabus designers, material developers, and especially professors and students. By doing so, they will be able to promote and implement gender equality, recognizing the determinants, obstacles, and consequences of sustaining gender-transformative approaches in their classes within the Iranian higher education system. The expected outcomes include the eradication of gender inequality, the transformation of gender stereotypes, and the provision of equal opportunities for both males and females in education.

Keywords: citizenship education, gender inequality, higher education, patriarchal society, transformative education

Procedia PDF Downloads 64
471 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study

Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo

Abstract:

5G technology is evolving to satisfy a variety of service requirements that may allow high data-rate connections (1 Gbps) and lower latency times than current networks (<1 ms). In order to support high data transmission speeds and high-traffic services for eMBB (enhanced mobile broadband) use cases, 5G systems use different frequency bands of the radio spectrum (700 MHz, 3.6-3.8 GHz, and 26.5-27.5 GHz), thus taking advantage of higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space, and therefore, in order to guarantee widespread coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the zebrafish embryo toxicity test (Z-FET). We exposed developing zebrafish to a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the sample holder, ensuring an incident power density of 10 mW/cm². During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we measured different morphological endpoints every 24 hours. The Z-FET is a short-term test carried out on fertilized zebrafish eggs and represents an effective alternative to acute tests with adult fish (OECD, 2013). We observed that 5G exposure had no significant impact on mortality or morphology: exposed larvae showed normal detachment of the tail, presence of a heartbeat, and well-organized somites, although their hatching rate was lower than that of untreated larvae at 48 h of exposure. Moreover, immunohistochemical analysis performed on the larvae showed negativity for HSP-70 expression, used as a biomarker. This is a preliminary study on the evaluation of potential toxicity induced by 5G, and further studies are needed to clarify the real risk of exposure to these electromagnetic fields.
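
As a rough plausibility check on the exposure setup, the sketch below estimates the transmit power needed to produce 10 mW/cm² at 15 cm using the far-field, on-axis power density relation S = PG/(4πd²); the 20 dBi horn gain is an assumed figure, and at 15 cm from a horn at 27 GHz the sample may in fact sit in the near field, so this is only an order-of-magnitude estimate.

```python
import math

d = 0.15                       # antenna-to-sample distance, m (from the study)
target_s = 100.0               # 10 mW/cm^2 expressed as W/m^2

gain_dbi = 20.0                # assumed gain of a standard pyramidal horn
gain = 10.0 ** (gain_dbi / 10.0)

# Far-field, on-axis power density: S = P * G / (4 * pi * d^2)
p_tx = target_s * 4.0 * math.pi * d ** 2 / gain
print(f'Transmit power for 10 mW/cm^2 at {d * 100:.0f} cm: ~{p_tx * 1e3:.0f} mW')
```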

Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test

Procedia PDF Downloads 128
470 Social Inequality and Inclusion Policies in India: Lessons Learned and the Way Forward

Authors: Usharani Rathinam

Abstract:

Although policies directing the inclusion of the marginalized were in effect, the majority of the chronically impoverished in India belonged to scheduled castes and scheduled tribes. Moreover, taking into account that poverty is gendered, destitute women belonged to the lower social order, and their needs were largely not highlighted at the policy level. This paper discusses a social-relations perspective on poverty, highlighting how the social order structurally embedded in society can perpetuate chronic poverty, followed by a critical review of India's social inclusion policies and their merits and demerits in addressing chronic poverty. A multiple case study design was utilized to address this concern in four districts of India: Jhansi, Tikamgarh, Cuddalore, and Anantapur. These four districts were selected by purposive sampling based on the criteria that a district should either be categorized as a backward district or have a history of high poverty rates. Qualitative methods, including eighty in-depth interviews, six focus group discussions, six social mapping procedures, and three key informant interviews, were conducted across these locations in 2011. Analysis of the data revealed that, irrespective of gender, scheduled caste and scheduled tribe participants were found to be chronically poor in all districts. Caste-based discrimination is exhibited at both the micro (village) and macro (institutional) levels. At the village level, lower-caste respondents had less access to public resources. Within institutional settings, unequal access to resources was noticed, especially in fund distribution: this study found that half of the budget intended for scheduled castes and scheduled tribes was appropriated by upper-caste administrative staff. This implies that power based on social hierarchy prevents lower-caste participants from accessing better economic, social, and political benefits, which has led them to suffer long-term poverty. This study also explored the traditional ties between caste, social structure, and bonded labour as a cause of long-term poverty. Though equal access is emphasized in constitutional rights, issues at the micro level have not been reflected in the formulation of these rights. Therefore, it is important for policy to consider this structural complexity and then focus on issues such as the equal distribution of assets and infrastructural facilities, which will reduce exclusion and foster long-term security in areas such as employment, markets, and public distribution.

Keywords: caste, inclusion policies, India, social order

Procedia PDF Downloads 205
469 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes

Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil

Abstract:

The Raman tweezers technique has become prevalent in single-cell studies. It combines Raman spectroscopy, which provides information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam to trap single cells. Raman tweezers thus enable researchers to analyze single cells and explore different applications, including studying blood cells, monitoring blood-related disorders, and probing silver nanoparticle-induced stress. Interest in the toxic effects of nanoparticles has grown alongside their expanding applications, and the interaction of nanoparticles with cells may vary with particle size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100× oil-immersion objective with a high numerical aperture (NA 1.3) was used to focus the laser beam into the sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers into vacutainer tubes and centrifuged to separate the blood components. A 1.5 ml aliquot of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial; since the concentration of the suspension was 0.02 mg/ml, 0.03 mg of nanoparticles was present in the 0.1 ml obtained. Then, 25 µl of RBCs was diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO₂ incubator. Raman spectroscopic measurements were performed after 24 hours and 48 hours of incubation. All spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s accumulation time, and 2 accumulations. Major changes were observed in the peaks at 565 cm⁻¹, 1211 cm⁻¹, 1224 cm⁻¹, 1371 cm⁻¹, and 1638 cm⁻¹. A decrease in intensity at 565 cm⁻¹, an increase at 1211 cm⁻¹ with a reduction at 1224 cm⁻¹, an increase in intensity at 1371 cm⁻¹, and the disappearance of the peak at 1635 cm⁻¹ indicate deoxygenation of hemoglobin. Larger nanoparticles produced the greatest spectral changes, while fewer changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles.
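
A short sketch of the dose arithmetic in the protocol above, reproducing the 0.03 mg and 0.015 mg figures; the final concentration per millilitre of cell suspension is a derived value for illustration, not one reported in the abstract.

```python
stock_volume_ml = 1.5          # initial nanoparticle suspension volume
stock_conc_mg_ml = 0.02        # 0.02 mg/ml, per the protocol above
concentrate_ml = 0.1           # volume remaining after two washes

total_np_mg = stock_volume_ml * stock_conc_mg_ml        # 0.03 mg
conc_after_wash = total_np_mg / concentrate_ml          # 0.3 mg/ml

dose_ml = 0.050                # 50 ul of concentrate added to the cells
dose_mg = dose_ml * conc_after_wash                     # 0.015 mg

suspension_ml = 2.0 + 0.025 + dose_ml                   # PBS + RBCs + dose
print(f'Dose: {dose_mg:.3f} mg in {suspension_ml:.3f} ml '
      f'(~{dose_mg / suspension_ml * 1000:.1f} ug/ml)')
```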

Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles

Procedia PDF Downloads 288
468 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching

Authors: Woranittha Kritsarikan

Abstract:

Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29-32% Nd, 64.2-68.5% Fe, and 1-1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end-of-life products. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production, containing 23.6% Nd, 60.3% Fe, and 0.261% B, in order to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process combining oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550-800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 hours. The leachate was then subjected to drying and roasting at 700-800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄). Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks became more pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating more effective roasting. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally achieve neodymium oxide.

Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching

Procedia PDF Downloads 176
467 Tests for Zero Inflation in Count Data with Measurement Error in Covariates

Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao

Abstract:

In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of increased health service utilization is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or excess zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected under a Poisson distribution with mean equal to 1.33 (expected frequency of zero counts = 156). This suggests that excess zero counts may exist. Therefore, we study tests for detecting zero inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and approximate maximum likelihood estimation (AMLE) can be derived accordingly; the resulting estimator is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect a zero-inflation effect in the ZIP model with measurement error. The proposed test statistic asymptotically follows a standard normal distribution under H₀, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, the existing tests and our proposed test all imply that H₀ should be rejected with a p-value less than 0.001; that is, the zero-inflation effect is highly significant, and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only a different covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account will yield statistically more reliable and precise information.
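
The excess-zero comparison quoted above (206 observed vs. 156 expected zeros) follows directly from the Poisson zero probability e^(-λ); a minimal sketch, assuming a sample size of about 590, which is not stated in the abstract but is implied by 156 ≈ n·e^(-1.33):

```python
import math

lam = 1.33                 # Poisson mean reported above
n = 590                    # sample size (assumed: chosen so n * exp(-lam) ~ 156)

p_zero = math.exp(-lam)                 # P(Y = 0) under Poisson(lam)
expected_zeros = n * p_zero             # ~156
observed_zeros = 206

print(f'expected zeros: {expected_zeros:.0f}, observed zeros: {observed_zeros}')
print(f'excess zeros: {observed_zeros - expected_zeros:.0f} '
      '(score tests such as the one proposed here formalize this comparison)')
```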

Keywords: count data, measurement error, score test, zero inflation

Procedia PDF Downloads 286
466 Posts by Influencers Promoting Water Saving: The Impact of Distance and the Perception of Effectiveness on Behavior

Authors: Sancho-Esper Franco, Rodríguez Sánchez Carla, Sánchez Carolina, Orús-Sanclemente Carlos

Abstract:

Water scarcity is a reality that affects many regions of the world and is aggravated by climate change and population growth. Saving water has become an urgent need to ensure the sustainability of the planet and the survival of many communities, and youth and social networks play a key role in promoting responsible practices and adopting habits that contribute to environmental preservation. This study analyzes the persuasive capacity of messages designed to promote pro-environmental behaviors among youth. Specifically, it studies how the perceived effectiveness of the response (personal response efficacy) and the perceived distance from the source of the message influence the water-saving behavior of the audience. To do so, two communication frameworks are combined. The first is Construal Level Theory, which is based on the concept of "psychological distance": people, objects, or events can be perceived as psychologically near or far, and this subjective distance, whether social, temporal, or spatial, determines attitudes, emotions, and actions. This research focuses on the spatial and social distance generated by cultural differences between influencers and their audience, in order to understand how cultural distance can influence the persuasiveness of a message. Research on the effects of influencer-follower psychological distance in the pro-environmental field is very limited, yet it is relevant because people may learn specific behaviors suggested by opinion leaders such as social media influencers. Second, different approaches to behavioral change suggest that the perceived efficacy of a behavior can explain individual pro-environmental actions. People are more likely to adopt a new behavior if they perceive that they are capable of performing it (efficacy belief) and that their behavior will effectively contribute to solving the problem (personal response efficacy). It is also important to study the different actors (social and individual) that are perceived as responsible for addressing environmental problems. Specifically, we analyze to what extent the belief that individuals' water-saving actions are effective in solving the problem can influence water-saving behavior, since this perceived individual effectiveness increases people's sense of obligation and responsibility toward the problem. In this regard, however, the empirical evidence presents mixed results. Our study addresses the call for experimental studies manipulating different subtypes of response effectiveness to generate robust causal evidence. Based on all of the above, this research analyzes whether cultural distance (local vs. international influencer) and the perceived effectiveness of the behavior (individual vs. collective response efficacy) affect social network users' intention to conserve water and their actual water-saving behavior. A 2 (local vs. international influencer) × 2 (individual vs. collective response effectiveness) experiment is designed and estimated. The results show that a message from a local influencer appealing to individual responsibility exerts the greatest influence on intention and actual water-saving behavior, given the cultural closeness between influencer and follower; the appeal to individual responsibility increases the feeling of obligation to participate in pro-environmental actions. These results offer important implications for social marketing campaigns that seek to promote water conservation.

Keywords: social marketing, influencer, message framing, experiment, personal response efficacy, water saving

Procedia PDF Downloads 62
465 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars

Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea

Abstract:

Alkali-activated materials are promising binders obtained by alkaline attack on fly ash, metakaolin, or blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry, and performance of this class of materials. Although much research in recent years has focused on mix designs and curing conditions, the lack of exhaustive activation models and of standardized mix designs and curing conditions, together with insufficient investigation of shrinkage behavior, efflorescence, additives, and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ash obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties, and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, the durability of the mortars in harsh environments was evaluated. Mortars obtained using only GGBFS as binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO₂/Na₂O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and lower shrinkage values when higher amounts of ash were used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa, associated with shrinkage values of about 4200 microstrain, were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. All of them showed good resistance in a 5 wt% H₂SO₄ solution even after 60 days of immersion, while their mechanical properties decreased by 60-90% when exposed to thermal cycles up to 700 °C.
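
Since the SiO₂/Na₂O molar ratio of the activator governs the behavior reported above, the following is a minimal sketch of how that ratio is typically computed when blending a commercial sodium silicate solution with NaOH; the 28.7 wt% SiO₂ / 8.9 wt% Na₂O silicate composition and the masses are assumptions for illustration, not the study's formulation.

```python
M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00   # molar masses, g/mol

# Assumed commercial sodium silicate solution: 28.7 wt% SiO2, 8.9 wt% Na2O.
W_SIO2, W_NA2O = 0.287, 0.089

def sio2_na2o_molar_ratio(silicate_g, naoh_g):
    """Molar SiO2/Na2O of an activator blended from silicate solution and NaOH.
    On an oxide basis, 2 mol NaOH contribute 1 mol Na2O."""
    mol_sio2 = silicate_g * W_SIO2 / M_SIO2
    mol_na2o = silicate_g * W_NA2O / M_NA2O + (naoh_g / M_NAOH) / 2.0
    return mol_sio2 / mol_na2o

for naoh_g in (0.0, 20.0, 40.0):   # illustrative NaOH additions to 500 g silicate
    ratio = sio2_na2o_molar_ratio(500.0, naoh_g)
    print(f'500 g silicate + {naoh_g:4.0f} g NaOH -> SiO2/Na2O = {ratio:.2f}')
```

Adding NaOH lowers the ratio, which is the usual lever for tuning activator composition, and hence strength development and shrinkage, in slag-based systems.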

Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag

Procedia PDF Downloads 325
464 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks

Authors: Nafisa Mahbub, Hajo Ribberink

Abstract:

Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a substantial risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component of an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the ‘electrified road’ (eroad), where vehicles receive power while driving, for instance through an overhead catenary (OC) wire (as used by trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for the charging infrastructure and for the truck batteries, the study included the contributions of both to the total material use. The study developed a bottom-up model comparing the three charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for the charging technologies and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, scenarios ranging from overnight charging (350 kW) to megawatt charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study assumed trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. Material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and batteries. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.
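
To illustrate the bottom-up accounting described above, here is a toy sketch that trades smaller per-truck batteries against shared road infrastructure; every intensity figure (kg of material per kWh of battery or per lane-km of catenary) and the fleet size are hypothetical placeholders, not values from the study.

```python
# Hypothetical material intensities; real values vary with chemistry and design.
BATTERY_KG_PER_KWH = {'Li': 0.10, 'Cu': 0.30, 'Al': 0.40, 'Fe': 0.20}
CATENARY_KG_PER_LANE_KM = {'Cu': 2500.0, 'Fe': 9000.0}

def fleet_material_t(n_trucks, battery_kwh, lane_km=0.0, infra=None):
    """Bottom-up demand in tonnes: per-truck batteries plus shared infrastructure."""
    totals = {m: n_trucks * battery_kwh * kg / 1000.0
              for m, kg in BATTERY_KG_PER_KWH.items()}
    for m, kg in (infra or {}).items():
        totals[m] = totals.get(m, 0.0) + lane_km * kg / 1000.0
    return totals

# 10,000 trucks; 500 km of highway, one catenary lane per direction = 1,000 lane-km.
plug_in = fleet_material_t(10_000, 800)                           # 800 kWh packs
catenary = fleet_material_t(10_000, 200, 1_000,
                            CATENARY_KG_PER_LANE_KM)              # 200 kWh + OC
print('plug-in :', {m: round(v) for m, v in plug_in.items()})
print('catenary:', {m: round(v) for m, v in catenary.items()})
```

The sketch makes the core trade-off visible: eroads add fixed infrastructure material, but the 800 kWh vs. 200 kWh battery difference, multiplied over a fleet, can dominate the totals.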

Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug-in charger

Procedia PDF Downloads 50
463 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing

Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn

Abstract:

Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be analysed using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants’ mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and which ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
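
As a concrete illustration of the frontal alpha asymmetry measure highlighted in the results, here is a minimal sketch computing ln(right) − ln(left) alpha-band power from two frontal channels (F3/F4) via Welch's PSD; the sampling rate, band limits, and random stand-in signals are assumptions for illustration, not parameters from any reviewed study.

```python
import numpy as np
from scipy.signal import welch

FS = 250.0                       # sampling rate, Hz (assumed)

def alpha_power(signal, fs=FS, band=(8.0, 13.0)):
    """Alpha-band power from Welch's PSD estimate (2 s segments)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs * 2))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

def frontal_alpha_asymmetry(f3, f4):
    """ln(right) - ln(left) alpha power; positive values index relatively greater
    left-frontal activation, since alpha is inversely related to cortical activity."""
    return np.log(alpha_power(f4)) - np.log(alpha_power(f3))

rng = np.random.default_rng(0)
f3 = rng.standard_normal(int(FS * 30))   # stand-ins for 30 s of F3/F4 EEG
f4 = rng.standard_normal(int(FS * 30))
print(f'FAA = {frontal_alpha_asymmetry(f3, f4):+.3f}')
```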

Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency

Procedia PDF Downloads 111
462 Men of Congress in Today’s Brazil: Ethnographic Notes on Neoliberal Masculinities in Support of Bolsonaro

Authors: Joao Vicente Pereira Fernandez

Abstract:

In the context of a democratic crisis, a new wave of authoritarianism propels domineering male figures into leadership posts worldwide. Although the gendered aspect of this phenomenon has been reasonably documented, recent studies have focused on high-level commanding posts, such as those of president and prime minister, leaving other positions of political power with limited attention. This natural focus of investigation, however powerful, seems to have restricted our understanding of the phenomenon by precluding a more thorough inquiry into its gendered aspects and its consequences for political representation as a whole. Trying to fill this gap, in recent research we examined the election results of Jair Bolsonaro’s party for the Legislative Branch in 2018. We found that the party's proportion of non-male representatives was average, showing that it provided reasonable access for women to the legislature in comparative perspective. However, and perhaps more intuitively, we also found that the elected members of Bolsonaro’s party performed very gendered roles, which allowed us to draw the first outlines of the representative profiles gathered around the new-right in Brazil. These results unveiled new horizons for further research, addressing topics that range from the role of women in the new-right in Brazilian institutional politics to the relations between these profiles of representatives, their agendas, and their political and electoral strategies. This article aims to deepen the understanding of some of these profiles in order to lay the groundwork for the second research agenda mentioned above. More specifically, it focuses on two of the three profiles that were drawn predominantly, if not entirely, from male subjects in our previous research, with the objective of portraying the masculinity standards mobilized and promoted by them. These profiles, the entrepreneur and the army man, were chosen because of their proximity to both liberal and authoritarian views and, moreover, because they may represent two facets of the new-right that were integrated in a certain way around Bolsonaro in 2018 but that can be reworked in the future. After a brief introduction to the literature on masculinity and politics in times of democratic crisis, we succinctly present the relevant results of our previous research and then describe these two profiles and their masculinities in detail. We adopt a combination of ethnography and discourse analysis, methods that allow us to make sense of the data collected in our previous research as well as of the data gathered for this article: social media posts and interactions between the elected members who inspired these profiles and their supporters. Finally, we discuss our results, presenting our main argument on how these descriptions provide a further understanding of the gendered aspect of liberal authoritarianism, from which to better apprehend its political implications in Brazil.

Keywords: Brazilian politics, gendered politics, masculinities, new-right

Procedia PDF Downloads 119
461 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts

Authors: Antonieta Heyden Megale

Abstract:

With the advent of globalization, the social, cultural, and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is propagated by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is therefore mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the most prestigious language. Firstly, then, Brazilian teachers who teach Portuguese in such contexts find themselves in a situation in which they teach in a low-status language. Secondly, because such teachers’ actions are guided by a cultural matrix that differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, intends to analyze how they discursively represent their own professional identities and practices. More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively reconstruct their professional careers and explain the factors that led them to an international or an immersion bilingual school; (ii) position themselves with respect to their linguistic repertoires; (iii) interpret the intercultural practices they are involved in at school; and (iv) position themselves by foregrounding categories that determine their membership in the group of Portuguese teachers. We have worked with these teachers’ autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following trigger: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". These narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concept of positioning and indexical cues were taken into consideration in the data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.

Keywords: bilingual schools, identity, interculturality, narrative

Procedia PDF Downloads 336
460 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall treatment, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were essentially identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function. Thus, for the remaining simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity-inlet condition for the inflow boundary, and a pressure-outlet condition for the boundaries in contact with air provide the best possible results. Also, the standard wall function is chosen as the wall treatment, and the Standard k-ε turbulence model gives results most consistent with the experiments. As the jet gets closer to the end of the basin, the difference between the numerical and experimental results increases. The mesh with 10602 nodes, the Standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a vertical circular spillway. There was good agreement between the numerical and experimental results in the upper and lower nappe profiles. In the study of water level over the crest and discharge, at low water levels the numerical results are in good agreement with the experiments, but as the water level increases, the difference between the numerical and experimental discharge grows. In the study of the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.
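As an illustration of the comparison criterion described above (not part of the original study), the short Python sketch below computes the point-wise deviation and RMSE between a numerical and an experimental nappe profile; the profile arrays are hypothetical placeholders standing in for the actual CFD and laboratory data.

```python
import numpy as np

# Hypothetical nappe (jet trajectory) profiles: water-surface elevation (m)
# sampled at the same horizontal stations for the CFD model and the experiment.
x = np.linspace(0.0, 2.0, 21)                    # distance from the crest (m)
z_numerical = 0.50 - 0.12 * x - 0.04 * x**2      # placeholder CFD profile
z_experimental = 0.50 - 0.11 * x - 0.05 * x**2   # placeholder measured profile

# Point-wise and aggregate discrepancy between the two profiles.
abs_error = np.abs(z_numerical - z_experimental)
rmse = np.sqrt(np.mean(abs_error ** 2))

print(f"max deviation: {abs_error.max():.4f} m at x = {x[abs_error.argmax()]:.2f} m")
print(f"profile RMSE : {rmse:.4f} m")
```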

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 84
459 Strategies for Arctic Greenhouse Farming: An Energy and Technology Survey of Greenhouse Farming in the North of Sweden

Authors: William Sigvardsson, Christoffer Alenius, Jenny Lindblom, Andreas Johansson, Marcus Sandberg

Abstract:

This article covers a study focusing on a subarctic greenhouse located in Nikkala, Sweden. Through a site visit and the creation of a CFD model, the study investigates the differences in energy demand between high-pressure sodium (HPS) lights and light-emitting diode (LED) lights in combination with an air-carried and a water-carried heating system, respectively. Through an IDA ICE model, the impact of insulating the parts of the greenhouse without active cultivation was also investigated. The purpose was to compare the current system in the greenhouse to state-of-the-art alternatives and to evaluate whether investments in a water-carried heating system combined with LED lights, and in insulating the non-cultivating parts of the greenhouse, could be considered profitable. Operating a greenhouse in the harsh subarctic climate found in the northern parts of Sweden is not an easy task, especially if the operation is year-round. With an average temperature below -5 °C from November through January, efficient growing techniques are a must to ensure a profitable business. Today the most crucial parts of a greenhouse are the heating system, lighting system, dehumidifying measures, and thermal screens, and the impact of a poorly designed system in a sub-arctic climate can be devastating, as the margins are slim. The greenhouse studied uses a pellet burner to power its air-carried heating system. The simulations found that the resulting savings amounted to just under 14 800 SEK monthly, or 18 % of the total cost of energy, by implementing the water-carried heating system in combination with the LED lamps. Given this, a payback period of 3-9 years could be expected under different scenarios, including specific time periods, financial aid, and the resale price of the current system. Insulating the non-cultivating parts of the greenhouse was found to offer possible savings of 25 300 SEK annually, or 46 % of the current heat demand, resulting in a payback period of just over 1-2 years. Given the possible energy savings, a reduction in emitted CO2 equivalents of almost 1.9 tonnes could be achieved annually. It was concluded that relatively inexpensive investments in modern greenhouse equipment could make a significant contribution to reducing the energy consumption of the greenhouse, resulting in a more competitive business environment for sub-arctic greenhouse owners. New parts of the greenhouse should be built with the water-carried heating system in combination with state-of-the-art LED lights, and all parts which are not housing active cultivation should be insulated. If the greenhouse in Nikkala is eligible for financial aid or finds a resale value in the current system, an investment should be made in a new water-carried heating system in combination with LED lights.
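The payback figures above follow from simple payback arithmetic; the sketch below reproduces it in Python using the savings reported in the abstract, with the investment costs and resale value as hypothetical placeholders, since only the resulting payback ranges are reported.

```python
# Simple payback estimate for the two measures discussed above.
# Savings are taken from the abstract; investment costs and resale value are
# hypothetical placeholders (the abstract reports only the payback ranges).
monthly_saving_heat_led = 14_800      # SEK/month, water-carried heating + LED
annual_saving_insulation = 25_300     # SEK/year, insulating non-cultivating parts

investment_heat_led = 1_000_000       # SEK, assumed installed cost
resale_current_system = 150_000       # SEK, assumed resale value of old equipment
investment_insulation = 40_000        # SEK, assumed insulation cost

payback_heat_led = (investment_heat_led - resale_current_system) / (monthly_saving_heat_led * 12)
payback_insulation = investment_insulation / annual_saving_insulation

print(f"heating + LED payback: {payback_heat_led:.1f} years")   # within the 3-9 year range
print(f"insulation payback   : {payback_insulation:.1f} years") # within the 1-2 year range
```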

Keywords: energy efficiency, sub-arctic greenhouses, energy measures, greenhouse climate control, greenhouse technology, CFD

Procedia PDF Downloads 74
458 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics

Authors: Maria Arechavaleta, Mark Halpin

Abstract:

In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases total cost and provides a benefit which is difficult to quantify accurately. An approach to quantify the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on a curve representing an energy consumption or production value over a period of time; a one-minute period is used for the work in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and which are consistent with their available funds.
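A minimal Python sketch of the indices described above, using synthetic one-minute curves and an assumed battery model in place of measured site data: the production and consumption curves are overlaid, the mismatch is passed through the battery, and the loss-of-energy probability and expected unserved energy are computed from the curtailed intervals.

```python
import numpy as np

rng = np.random.default_rng(0)
minutes = 7 * 24 * 60                        # one week of one-minute intervals
t = np.arange(minutes)

# Placeholder one-minute curves (kWh per interval); real use would load the
# site-specific measurements described in the abstract.
production = 0.05 * np.clip(np.sin((t % 1440 - 360) / 720 * np.pi), 0, None)
consumption = 0.02 + 0.01 * rng.random(minutes)

battery_capacity = 10.0                      # kWh, assumed storage size
soc = battery_capacity / 2                   # initial state of charge
unserved = np.zeros(minutes)                 # curtailed consumption per interval

for i in range(minutes):
    balance = production[i] - consumption[i]        # surplus (+) or deficit (-)
    if balance >= 0:
        soc = min(battery_capacity, soc + balance)  # charge, spill the excess
    else:
        drawn = min(soc, -balance)                  # discharge what is available
        soc -= drawn
        unserved[i] = -balance - drawn              # remainder must be curtailed

loss_of_energy_probability = np.mean(unserved > 0)
expected_unserved_energy = unserved.sum()
print(f"LOEP over the week: {loss_of_energy_probability:.3f}")
print(f"EUE  over the week: {expected_unserved_energy:.2f} kWh")
```

Repeating the calculation for incrementally larger array or battery sizes gives the incremental benefit-cost information the paper describes.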

Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems

Procedia PDF Downloads 233
457 Preschoolers’ Selective Trust in Moral Promises

Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu

Abstract:

Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. According to Mayer et al.'s model of trust, the characteristics of speakers, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral promises) and the fulfilment of promises (kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted the binary-choice paradigm, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children's trust judgments. Experiment 2 used a single-choice paradigm, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgments in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
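The stated sample-size calculation can be reproduced approximately with a standard power analysis; the sketch below uses statsmodels instead of G*Power and assumes a chi-square test with one degree of freedom (n_bins=2), an assumption since the abstract does not report the degrees of freedom.

```python
from statsmodels.stats.power import GofChisquarePower

# Effect size, alpha and power are taken from the abstract; n_bins=2 (one degree
# of freedom) is an assumption, since the abstract does not report the df.
analysis = GofChisquarePower()
n_required = analysis.solve_power(effect_size=0.30, alpha=0.05, power=0.85, n_bins=2)
print(f"required sample size ≈ {n_required:.0f}")   # ~100, matching the reported N
```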

Keywords: promise, trust, moral judgement, preschoolers

Procedia PDF Downloads 53
456 Argos System: Improvements and Future of the Constellation

Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard

Abstract:

Argos has been the main satellite telemetry system used by the wildlife research community since its creation in 1978, for animal tracking and scientific data collection all around the world, to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologists' community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payloads on NOAA 15 and NOAA 18, Argos 3 payloads on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018), and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP SG B1 in December 2022, and METOP-SG-B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG, instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), enhancement of high data rates (1200-4800 bps), and improved downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos for Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow a revisit time of fewer than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, allowing recovery of Argos beacons at sea or on the ground within a 100 km-radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.

Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services

Procedia PDF Downloads 179
455 Characterization of Double Shockley Stacking Fault in 4H-SiC Epilayer

Authors: Zhe Li, Tao Ju, Liguo Zhang, Zehong Zhang, Baoshun Zhang

Abstract:

In-grown stacking faults (IGSFs) in 4H-SiC epilayers can cause increased leakage current and reduce the blocking voltage of 4H-SiC power devices. The double Shockley stacking fault (2SSF) is a common type of IGSF with double slips on the basal planes. In this study, a 2SSF in a 4H-SiC epilayer grown by chemical vapor deposition (CVD) is characterized. The nucleation site of the 2SSF is discussed, and a model for the 2SSF nucleation is proposed. Homo-epitaxial 4H-SiC is grown on a commercial 4-degree off-cut substrate by a home-built hot-wall CVD. Defect-selected-etching (DSE) is conducted with molten KOH at 500 degrees Celsius for 1-2 min. Room-temperature cathodoluminescence (CL) is conducted at a 20 kV acceleration voltage. Low-temperature photoluminescence (LTPL) is conducted at 3.6 K with the 325 nm He-Cd laser line. In the CL image, a triangular area with bright contrast is observed. Two partial dislocations (PDs) with a 20-degree angle between them show linear dark contrast on the edges of the IGSF. CL and LTPL spectra are recorded to verify the IGSF's type. The CL spectrum shows the maximum photoemission at 2.431 eV and negligible bandgap emission. In the LTPL spectrum, four phonon replicas are found at 2.468 eV, 2.438 eV, 2.420 eV and 2.410 eV, respectively. The Egx is estimated to be 2.512 eV. A shoulder red-shifted from the main peak in CL, and a slight protrusion at the same wavelength in LTPL, are identified as the so-called Egx lines. Based on the CL and LTPL results, the IGSF is identified as a 2SSF. Back etching by neutral loop discharge and DSE are conducted to track the origin of the 2SSF, and the nucleation site is found to be a threading screw dislocation (TSD) in this sample. A nucleation mechanism model is proposed for the formation of the 2SSF. Steps introduced on the surface by the off-cut and by the TSD are both suggested to be two C-Si bilayers in height. The intersections of these two types of steps lie along the [11-20] direction from the TSD, with a four-bilayer step at each intersection. The nucleation of the 2SSF during growth is proposed as follows. Firstly, the upper two bilayers of the four-bilayer step grow down and block the lower two at one intersection, and an IGSF is generated. Secondly, the step-flow grows over the IGSF successively and forms an AC/ABCABC/BA/BC stacking sequence. A 2SSF is thus formed and extends by the step-flow growth. In conclusion, a triangular IGSF is characterized by the CL approach. Based on the CL and LTPL spectra, the estimated Egx is 2.512 eV and the IGSF is identified as a 2SSF. By back etching, the 2SSF nucleation site is found to be a TSD. A model for the 2SSF nucleation from an intersection of off-cut- and TSD-introduced steps is proposed.
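A minimal sketch of the Egx estimate described above: each LTPL phonon-replica energy is shifted up by a phonon energy. The replica energies are those reported in the abstract, while the phonon energies are typical literature values for 4H-SiC assumed here for illustration, so the resulting average lands close to, but not exactly at, the reported 2.512 eV.

```python
import numpy as np

# Phonon-replica peak positions from the LTPL spectrum (eV), as reported above.
replicas = np.array([2.468, 2.438, 2.420, 2.410])

# Assumed phonon energies (eV) for 4H-SiC (TA, LA, TO, LO), taken as typical
# literature values; the abstract does not state which values were used.
phonons = np.array([0.046, 0.077, 0.096, 0.104])

egx_estimates = replicas + phonons
print("per-replica Egx estimates (eV):", np.round(egx_estimates, 3))
print(f"mean Egx ≈ {egx_estimates.mean():.3f} eV   (reported: 2.512 eV)")
```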

Keywords: cathodoluminescence, defect-selected-etching, double Shockley stacking fault, low-temperature photoluminescence, nucleation model, silicon carbide

Procedia PDF Downloads 315
454 Intelligent Indoor Localization Using WLAN Fingerprinting

Authors: Gideon C. Joseph

Abstract:

The ability to localize mobile devices is quite important, as some applications may require location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative way of describing the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is of major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile’s RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing explicit mathematical relationships between the RSSI measurements and the localization parameters is one way of modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted correspond to two different tasks: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems, in terms of the Mean Absolute Error (MAE) achieved for the regression task and the error rates for the classification tasks.
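A minimal sketch of the two model types described above, using scikit-learn multilayer perceptrons and synthetic placeholder data in place of the collected WLAN fingerprint database: an MLP regressor predicts longitude and latitude from RSSI vectors, and an MLP classifier predicts the floor (the building label would be handled analogously).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.metrics import mean_absolute_error, accuracy_score

rng = np.random.default_rng(1)

# Synthetic stand-in for a WLAN fingerprint database: each row is the RSSI (dBm)
# from 20 access points at one reference point.
n_samples, n_aps = 1000, 20
X = rng.uniform(-100, -30, size=(n_samples, n_aps))
y_coords = X[:, :2] * np.array([0.5, -0.3]) + rng.normal(0, 1, (n_samples, 2))  # lon, lat
y_floor = (X[:, 2] > -65).astype(int)                                           # floor label

X_tr, X_te, yc_tr, yc_te, yf_tr, yf_te = train_test_split(
    X, y_coords, y_floor, test_size=0.2, random_state=0)

# Regression task: longitude and latitude (real-valued outputs).
regressor = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
regressor.fit(X_tr, yc_tr)
print("coordinate MAE:", mean_absolute_error(yc_te, regressor.predict(X_te)))

# Classification task: floor (building would be handled the same way).
classifier = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
classifier.fit(X_tr, yf_tr)
print("floor accuracy:", accuracy_score(yf_te, classifier.predict(X_te)))
```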

Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression

Procedia PDF Downloads 346
453 Violence against Women: A Study on the Aggressors' Profile

Authors: Giovana Privatte Maciera, Jair Izaías Kappann

Abstract:

Introduction: Violence against women is a complex phenomenon that accompanies a woman throughout her life and is the result of a social, cultural, political and religious construction based on the differences between the genders. Those differences are felt mainly because of the still-present patriarchal system, which naturalizes and legitimates the asymmetry of power. As a consequence of women's lasting historical and collective effort for legislation against the impunity of violence against women in the national scenery, a law known as Maria da Penha was enacted in 2006. The law was created as a protective measure for women who were victims of violence and, consequently, for the punishment of the aggressor. Methodology: Analysis of police inquiries filed at the Police Station for the Defense of Women of the city of Assis, with formal authorization from the courts, covering the period from 2013 to 2015. For the evaluation of the results, content analysis and the theoretical framework of psychoanalysis are used. Results and Discussion: The final analysis of the inquiries demonstrated that violence against women is reproduced by society, and that the aggressor, in most cases, is a member of the victim's own family, mainly the current or former spouse. The most common kinds of aggression were threats of bodily harm and physical violence, which normally occur accompanied by psychological violence, the most painful for the victims. Most of the aggressors were white, older than the victim, employed, and had only primary schooling. However, unlike what was expected, only a minority of the aggressors were users of alcohol and/or drugs and had children in common with the victim. There is a contrast between the number of victims who admitted having suffered some type of violence by the same aggressor before and the number of victims who had registered an occurrence before. The aggressors often use a discourse of denial in their testimony or try to justify their act as if the blame lay with the victim. It is believed that several interacting factors can influence the aggressor to commit the abuse, including psychological, personal and sociocultural factors. One hypothesis is that the aggressor has a history of violence in the family of origin. After the aggressor is judged, condemned or not, there is usually no rehabilitation plan or supervision that would enable his change. Conclusions: The study highlights the importance of examining the aggressor's characteristics and the reasons that led him to commit such violence, making possible the implementation of appropriate treatment to prevent and reduce the aggressions, as well as the creation of programs and actions that enable communication and understanding concerning the theme. This is because recurrence is still high, since the punitive system is not enough and the law is still ineffective and inefficient in certain aspects and in its own functioning. A compulsion to repeat is perceived both in the victims and in the aggressors, who almost always end up involved in disturbed and violent relationships, characterized by a relation of subordination and dominance.

Keywords: aggressors' profile, gender equality, Maria da Penha law, violence against women

Procedia PDF Downloads 333
452 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

The present article describes the regulatory impact analysis (RIA) of the efficiency of contracting Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. Therefore, inefficient contracting of this amount distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to promote this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666, of July 23rd, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for a more efficient and rational contracting of the transmission system, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, the IIE, is applied when demand exceeds the contracted amount beyond the established regulatory limit; it applies to consumer units, generators, and distribution companies. The second, the IIOC, is applied when distributors over-contract their demand. Thus, the establishment of the inefficiency installments IIE and IIOC is intended to prevent agents from contracting less capacity than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the efficiency of contracting transmission system usage, using real data from before and after the homologation of the normative resolution in 2015. For this, indicators such as the contracting efficiency indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The results demonstrated, through the ECI analysis, a decrease in contracting efficiency, a behaviour that had been occurring even before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI decreased notably, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to the agents that their contracted values are not adequate to maintain the provision of service to their users. The IIOC is also relevant, to the extent that it shows distributors that their contracted values are overestimated.
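Purely for illustration, the sketch below flags the two situations the installments are meant to discourage, charging when verified demand exceeds the contracted amount beyond a tolerance (excess) or falls short of it by more than the tolerance (over-contracting); the tolerance, tariff, and demand figures are hypothetical and are not the formulas homologated in NR No. 666/2015.

```python
# Purely illustrative: flags the two inefficiency situations described above.
# The tolerance, tariff and demand figures are hypothetical placeholders; they
# are NOT the formulas homologated in ANEEL Normative Resolution No. 666/2015.
TOLERANCE = 0.05    # assumed regulatory tolerance band (5 %)
TARIFF = 10.0       # assumed transmission usage tariff (R$/kW)

def inefficiency_installments(contracted_kw: float, verified_kw: float) -> dict:
    """Illustrative IIE / IIOC charges for one billing period."""
    iie = max(0.0, verified_kw - contracted_kw * (1 + TOLERANCE)) * TARIFF
    iioc = max(0.0, contracted_kw * (1 - TOLERANCE) - verified_kw) * TARIFF
    return {"IIE": iie, "IIOC": iioc}

print(inefficiency_installments(contracted_kw=1000, verified_kw=1120))  # excess -> IIE
print(inefficiency_installments(contracted_kw=1000, verified_kw=800))   # over-contracting -> IIOC
```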

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

Procedia PDF Downloads 118
451 The One, the Many, and the Doctrine of Divine Simplicity: Variations on Simplicity in Essentialist and Existentialist Metaphysics

Authors: Mark Wiebe

Abstract:

One of the tasks contemporary analytic philosophers have focused on (e.g., Wolterstorff, Alston, Plantinga, Hasker, and Crisp) is the analysis of certain medieval metaphysical frameworks. This growing body of scholarship has helped clarify and prevent distorted readings of medieval and ancient writers. However, as scholars like Dolezal, Duby, and Brower have pointed out, these analyses have been incomplete or inaccurate in some instances, e.g., with regard to analogical speech or the doctrine of divine simplicity (DDS). Additionally, contributors to this work frequently express opposing claims or fail to note substantial differences between ancient and medieval thinkers. This is the case regarding the comparison between Thomas Aquinas and others. Anton Pegis and Étienne Gilson have argued along this line that Thomas’ metaphysical framework represents a fundamental shift. Gilson describes Thomas’ metaphysics as a turn from a form of “essentialism” to “existentialism.” It will be argued that this shift distinguishes Thomas from many analytic philosophers as well as from other classical defenders of the DDS. Moreover, many of the objections analytic philosophers make against Thomas presume the same metaphysical principles undergirding the above-mentioned form of essentialism. This weakens their force against Thomas’ positions. In order to demonstrate these claims, it will be helpful to consider Thomas’ metaphysical outlook alongside that of two other prominent figures: Augustine and Ockham. One area of their thinking which brings their differences to the surface has to do with how each relates to Platonic and Neo-Platonic thought. More specifically, it is illuminating to consider whether and how each distinguishes or conceives essence and existence. It is also useful to see how each approaches the Platonic conflicts between essence and individuality, unity and intelligibility. In both of these areas, Thomas stands out from Augustine and Ockham. Although Augustine and Ockham diverge in many ways, both ultimately identify being with particularity and pit particularity against both unity and intelligibility. Contrastingly, Thomas argues that being is distinct from and prior to essence. Being (i.e., Being in itself) rather than essence or form must therefore serve as the ground and ultimate principle for the existence of everything in which being and essence are distinct. Additionally, since change, movement, and addition improve and give definition to finite being, multitude and distinction are principles of being rather than non-being. Consequently, each creature imitates and participates in God’s perfect Being in its own way; the perfection of each genus exists pre-eminently in God without being at odds with God’s simplicity; God has knowledge, power, and will; and these and the many other terms assigned to God refer truly to the being of God without being either meaningless or synonymous. The existentialist outlook at work in these claims distinguishes Thomas in a noteworthy way from his contemporaries and predecessors as much as it does from many of the analytic philosophers who have objected to his thought. This suggests that at least these kinds of objections do not apply to Thomas’ thought.

Keywords: theology, philosophy of religion, metaphysics, philosophy

Procedia PDF Downloads 71
450 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron-beam sensitive, which means there is a limit on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively higher, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, data collection workflows, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally this has all been done manually, or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher-quality 3DED data enable structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 Angstrom to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules

Procedia PDF Downloads 105
449 Integration of EEG and Motion Tracking Sensors for Objective Measure of Attention-Deficit Hyperactivity Disorder in Pre-Schoolers

Authors: Neha Bhattacharyya, Soumendra Singh, Amrita Banerjee, Ria Ghosh, Oindrila Sinha, Nairit Das, Rajkumar Gayen, Somya Subhra Pal, Sahely Ganguly, Tanmoy Dasgupta, Tanusree Dasgupta, Pulak Mondal, Aniruddha Adhikari, Sharmila Sarkar, Debasish Bhattacharyya, Asim Kumar Mallick, Om Prakash Singh, Samir Kumar Pal

Abstract:

Background: We aim to develop an integrated device comprising a single-probe EEG sensor and a CCD-based motion sensor for a more objective measure of Attention-Deficit Hyperactivity Disorder (ADHD). While the integrated device (MAHD) relies on the EEG signal (spectral density of the beta wave) for the assessment of attention during a given structured task (painting three segments of a circle using three different colors, namely red, green and blue), the CCD sensor captures the movement pattern of the subjects engaged in a continuous performance task (CPT). A statistical analysis of the attention and movement patterns was performed, and the accuracy of the completed tasks was analysed using indigenously developed software. The device with the embedded software, called MAHD, is intended to improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods: We have used the EEG signal from a single-channel dry sensor placed on the frontal lobe of the head of the subjects (3-5 year-old pre-schoolers). During the painting of three segments of a circle using three distinct colors (red, green, and blue), the absolute power of the delta and beta EEG waves from the subjects is found to be correlated with the relaxation and attention/cognitive-load conditions, respectively. While the relaxation condition of the subject hints at hyperactivity, a more direct CCD-based motion sensor is used to track the physical movement of the subject engaged in a continuous performance task (CPT), i.e., separating variously colored balls from one table to another. We have used our indigenously developed software for the statistical analysis to derive a scale for the objective assessment of ADHD. We have also compared our scale with the clinical ADHD evaluation. Results: In a limited clinical trial with preliminary statistical analysis, we have found a significant correlation between the objective assessment of the ADHD subjects and the clinician's conventional evaluation. Conclusion: MAHD, the integrated device, is intended to be an auxiliary tool to improve the accuracy of ADHD diagnosis by supporting greater criterion E certainty.
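A minimal sketch of the band-power computation implied above: the Welch power spectral density of a single-channel EEG trace is integrated over the delta and beta bands. The trace is synthetic, and the sampling rate and band limits are common conventions rather than values stated in the abstract.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                                   # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)               # 30 s of single-channel EEG

# Synthetic stand-in for a frontal-lobe EEG trace: delta + beta + noise.
rng = np.random.default_rng(2)
eeg = (40 * np.sin(2 * np.pi * 2 * t)      # delta component (~2 Hz)
       + 10 * np.sin(2 * np.pi * 20 * t)   # beta component (~20 Hz)
       + 5 * rng.standard_normal(t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=4 * fs)

def band_power(low_hz, high_hz):
    """Absolute power in [low_hz, high_hz] Hz, integrated from the Welch PSD."""
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return trapezoid(psd[mask], freqs[mask])

delta_power = band_power(0.5, 4)           # relaxation-related band
beta_power = band_power(13, 30)            # attention / cognitive-load band
print(f"delta: {delta_power:.1f}  beta: {beta_power:.1f}  "
      f"beta/delta ratio: {beta_power / delta_power:.3f}")
```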

Keywords: ADHD, CPT, EEG signal, motion sensor, psychometric test

Procedia PDF Downloads 97