Search results for: force identification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5013

453 Psychopathic Manager Behavior and the Employee Workplace Deviance: The Mediating Role of Revenge Motive, the Moderating Roles of Core Self-Evaluations and Attitude Importance

Authors: Sinem Bulkan

Abstract:

This study introduces the construct of psychopathic manager behaviour and aims to develop a psychopathic manager behaviour (Psycho-Man B) measure. The study also aims to understand the relationship between psychopathic manager behaviour and workplace deviance while investigating the mediating role of a revenge motive and the moderating roles of core self-evaluations and attitude importance. Data were collected from 519 employees from a wide variety of jobs and industries who currently work for or previously worked for a manager in a collectivist culture, Turkey. The Psycho-Man B measure was developed, resulting in five dimensions as opposed to the proposed ten dimensions. Simple linear and hierarchical regression analyses were conducted to test the hypotheses. The results of simple linear regression analyses showed that psychopathic manager behaviour was significantly positively related to supervisor-directed and organisation-directed deviance. Revenge motive towards the manager partially mediated the relationship between psychopathic manager behaviour and supervisor-directed deviance. Similarly, revenge motive towards the organisation partially mediated the relationship between psychopathic manager behaviour and organisation-directed deviance. Furthermore, no support was found for the expected moderating role of core self-evaluations in the relationships between revenge motive towards the manager and supervisor-directed deviance or between revenge motive towards the organisation and organisation-directed deviance. Attitude importance moderated the relationship between revenge motive towards the manager and supervisor-directed deviance, and between revenge motive towards the organisation and organisation-directed deviance. Moderated-mediation hypotheses were not supported for core self-evaluations but were supported for attitude importance. Additional analyses for sub-dimensions were conducted to further examine the hypotheses. Demographic variables were examined through independent-samples t-tests and one-way ANOVA. Finally, findings are discussed; limitations, suggestions and implications are presented. The major contribution of this study is that the ‘psychopathic manager behaviour’ construct was introduced to the literature and a scale for the reliable identification of psychopathic manager behaviour was developed to evaluate managers’ level of sub-clinical psychopathy in the workforce. The study showed that employees engage in different forms of supervisor-directed deviance and organisation-directed deviance depending on the level of their emotions and personal goals. Supervisor-directed and organisation-directed deviant behaviours became distinct as impulsive or premeditated, active or passive, and direct or indirect actions. Accordingly, it is important for organisations to recognise that employees’ affective state and the importance they attach to their attitudes about psychopathic manager behaviours predetermine certain types of employee deviant behaviour.
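For readers unfamiliar with the moderation analyses mentioned above, the minimal sketch below shows how an interaction (moderation) effect can be tested with hierarchical regression in Python. It is an illustration only, not the authors' analysis: the variable names, the data file, and the two-step model specification are assumptions.

```python
# Illustrative sketch of testing a moderation effect with hierarchical regression.
# Variable and file names (revenge_motive, attitude_importance, supervisor_deviance,
# employee_survey.csv) are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("employee_survey.csv")  # hypothetical data file

# Mean-center predictors so the interaction term is easier to interpret.
for col in ["revenge_motive", "attitude_importance"]:
    df[col + "_c"] = df[col] - df[col].mean()

# Step 1: main effects only; Step 2: add the interaction (moderation) term.
step1 = smf.ols("supervisor_deviance ~ revenge_motive_c + attitude_importance_c",
                data=df).fit()
step2 = smf.ols("supervisor_deviance ~ revenge_motive_c * attitude_importance_c",
                data=df).fit()

# A significant coefficient on the interaction term indicates moderation.
print(step2.summary())
```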

Keywords: attitude importance, core self evaluations, psychopathic manager behaviour, revenge motive, workplace deviance

Procedia PDF Downloads 244
452 Kinetic Evaluation of Sterically Hindered Amines under Partial Oxy-Combustion Conditions

Authors: Sara Camino, Fernando Vega, Mercedes Cano, Benito Navarrete, José A. Camino

Abstract:

Carbon capture and storage (CCS) technologies should play a relevant role in moving the European Union towards low-carbon systems by 2030. Partial oxy-combustion emerges as a promising CCS approach to mitigate anthropogenic CO₂ emissions. Its advantage over other CCS technologies relies on the production of a flue gas with a higher CO₂ concentration than that provided by conventional air-firing processes. The presence of more CO₂ in the flue gas increases the driving force in the separation process and hence might lead to further reductions in the energy requirements of the overall CO₂ capture process. A more CO₂-concentrated flue gas should enhance CO₂ capture by chemical absorption in terms of solvent kinetics and CO₂ cyclic capacity. Both affect the performance of the overall CO₂ absorption process by reducing the solvent flow-rate required for a specific CO₂ removal efficiency. Lower solvent flow-rates decrease the reboiler duty during the regeneration stage and also reduce the equipment size and pumping costs. Moreover, R&D activities in this field are focused on novel solvents and blends that provide lower CO₂ absorption enthalpies and therefore lower energy penalties associated with solvent regeneration. In this respect, sterically hindered amines are considered potential solvents for CO₂ capture. They provide a low energy requirement during the regeneration process due to their molecular structure. However, their absorption kinetics are slow and must be promoted by blending with faster solvents such as monoethanolamine (MEA) and piperazine (PZ). In this work, the kinetic behavior of two sterically hindered amines was studied under partial oxy-combustion conditions and compared with MEA. A lab-scale semi-batch reactor was used. The CO₂ composition of the synthetic flue gas varied from 15%v/v – conventional coal combustion – to 60%v/v – the maximum CO₂ concentration allowable for an optimal partial oxy-combustion operation. The first solvent, 2-amino-2-methyl-1-propanol (AMP), showed a hybrid behavior with fast kinetics and a low enthalpy of CO₂ absorption. The second solvent was isophrondiamine (IF), which has a steric hindrance in one of the amino groups; its free amino group increases its cyclic capacity. In general, the presence of a higher CO₂ concentration in the flue gas accelerated the CO₂ absorption phenomena, producing higher CO₂ absorption rates. In addition, the evolution of the CO₂ loading also exhibited higher values in the experiments using the more CO₂-concentrated flue gas. The steric hindrance causes a hybrid behavior in this solvent, between fast and slow kinetic solvents. The kinetic rates observed in all the experiments carried out using AMP were higher than those of MEA but lower than those of IF. The kinetic enhancement experienced by AMP at a high CO₂ concentration is slightly over 60%, compared with 70% – 80% for IF. AMP also improved its CO₂ absorption capacity by 24.7% from 15%v/v to 60%v/v, almost double the improvement achieved by MEA. In the IF experiments, the CO₂ loading increased by around 10% from 15%v/v to 60%v/v CO₂, and it changed from 1.10 to 1.34 mole CO₂ per mole of solvent, an increase of more than 20%. This hybrid kinetic behavior makes AMP and IF promising solvents for partial oxy-combustion applications.

Keywords: absorption, carbon capture, partial oxy-combustion, solvent

Procedia PDF Downloads 168
451 Intended Use of Genetically Modified Organisms: Advantages and Disadvantages

Authors: Pakize Ozlem Kurt Polat

Abstract:

A GMO (genetically modified organism) is the result of a laboratory process in which genes from the DNA of one species are extracted and artificially forced into the genes of an unrelated plant or animal. This technology includes nucleic acid hybridization, recombinant DNA, RNA, PCR, cell culture and gene cloning techniques. Studies are divided into three groups according to the properties transferred to the transgenic plant: up to 59% of transfers concern herbicide resistance, 28% concern resistance to insects and viruses, and 13% relate to quality characteristics. Not every crop is in commercial transgenic production; the main commercial plants are soybean, maize, canola, and cotton. The steadily growing interest in GMOs can be listed as follows: use in the health area (organ transplantation, gene therapy, vaccines and drugs); use in the industrial area (production of vitamins, monoclonal antibodies, vaccines, anti-cancer compounds, anti-oxidants, plastics, fibers, polyethers, human blood proteins and carotenoids, as well as emulsifiers, sweeteners, enzymes and food preservatives used as flavor enhancers or color changers); and use in agriculture (herbicide resistance; resistance to insects; resistance to viral, bacterial and fungal diseases; extended shelf life; improved quality; resistance to extreme conditions such as drought, salinity and frost; and improved nutritional value and quality). We explain all of these methods step by step in this research. GMOs have advantages and disadvantages, which we explain clearly in the full text; on this topic, researchers worldwide are divided into two camps. Some researchers think that GMOs have many disadvantages and should not be used, while others hold the opposite view. When looking at national laws on GMOs, the biosafety law of each country and union must be known. For biosecurity reasons, and to minimize the problems caused by transgenic plants, 130 countries, including Turkey, signed ‘the United Nations Biosafety Protocol’ on 24 May 2000. This protocol, prepared as the Cartagena Biosafety Protocol, entered into force on September 11, 2003. The protocol addresses the risks of GMOs in general use to human health, biodiversity and sustainability, and covers the prevention and transit of the transboundary movement of all GMOs that may have such effects. Under this protocol, we also have to know the US regulations on GMOs, the European Union regulations on GMOs, and the Turkish regulations on GMOs; these three frameworks have different applications and rules. The world population is increasing day by day and agricultural fields are getting smaller; for this reason, to feed humans and animals we should improve agricultural product yield and quality. Scientists are trying to solve this problem, and one solution is molecular biotechnology, which includes GMO methods. Before deciding to support or oppose GMOs, one should know the GMO protocols and their effects.

Keywords: biotechnology, GMO (genetically modified organism), molecular marker

Procedia PDF Downloads 221
450 A Visualization Classification Method for Identifying the Decayed Citrus Fruit Infected by Fungi Based on Hyperspectral Imaging

Authors: Jiangbo Li, Wenqian Huang

Abstract:

Early detection of fungal infection in citrus fruit is one of the major problems in the postharvest commercialization process. The automatic and nondestructive detection of infected fruits is still a challenge for the citrus industry. At present, the visual inspection of rotten citrus fruits is commonly performed by workers through ultraviolet-induced fluorescence or manual sorting in citrus packinghouses to remove fruit with fungal infection. However, the former entails a number of problems, because exposing people to this kind of lighting is potentially hazardous to human health, and the latter is very inefficient. Oranges were used as the research object. This study focuses on this problem and proposes an effective method based on Vis-NIR hyperspectral imaging in the wavelength range of 400-1000 nm with a spectroscopic resolution of 2.8 nm. In this work, three normalization approaches were applied prior to analysis to reduce the effect of sample curvature on spectral profiles, and mean normalization was found to be the most effective pretreatment for decreasing spectral variability due to curvature. Then, principal component analysis (PCA) was applied to a dataset composed of average spectra from decayed and normal tissue to reduce the dimensionality of the data and observe the ability of Vis-NIR hyperspectra to discriminate between the two classes. It was observed that normal and decayed spectra were separable along the resultant first principal component (PC1) axis. Subsequently, five wavelengths (bands) centered at 577, 702, 751, 808, and 923 nm were selected as the characteristic wavelengths by analyzing the loadings of PC1. A multispectral combination image was generated from the five selected characteristic wavelength images. Based on this combination image, intensity slicing and pseudocolor image processing were used to generate a 2-D visual classification image that enhances the contrast between normal and decayed tissue. Finally, an image segmentation algorithm for detection of decayed fruit was developed based on the pseudocolor image coupled with a simple thresholding method. For an independent set of 238 samples, including fruits infected by Penicillium digitatum and normal fruits, the success rates were 100% and 97.5%, respectively. The proposed algorithm was also used to identify oranges infected by Penicillium italicum with 100% identification accuracy, indicating that the proposed multispectral algorithm is effective and has the potential to be applied in the citrus industry.
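The following is a minimal sketch, not the authors' code, of the analysis chain described above: mean normalization, PCA on mean spectra, selection of characteristic wavelengths from the PC1 loadings, and a simple threshold segmentation of a band-combination image. The file names, array shapes, and threshold value are assumptions.

```python
# Hedged sketch of PCA-based wavelength selection and threshold segmentation.
import numpy as np
from sklearn.decomposition import PCA

spectra = np.load("mean_spectra.npy")      # shape: (n_samples, n_bands), hypothetical file
wavelengths = np.load("wavelengths.npy")   # shape: (n_bands,), hypothetical file

# Mean normalization to reduce curvature-induced spectral variability.
spectra_norm = spectra / spectra.mean(axis=1, keepdims=True)

pca = PCA(n_components=3)
scores = pca.fit_transform(spectra_norm)   # PC1 scores separate normal vs. decayed tissue

# Pick the bands with the largest-magnitude PC1 loadings as characteristic wavelengths.
loadings = pca.components_[0]
top_bands = np.argsort(np.abs(loadings))[-5:]
print("Selected wavelengths (nm):", wavelengths[top_bands])

# Combine the selected band images and segment decayed tissue with a simple threshold.
cube = np.load("hypercube.npy")            # shape: (rows, cols, n_bands), hypothetical file
combo = cube[:, :, top_bands].mean(axis=2)
decay_mask = combo < np.percentile(combo, 10)  # threshold is illustrative only
```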

Keywords: citrus fruit, early rotten, fungal infection, hyperspectral imaging

Procedia PDF Downloads 278
449 Study on Changes of Land Use Impacting the Process of Urbanization Using Landsat Data in African Regions: A Case Study in Kigali, Rwanda

Authors: Delphine Mukaneza, Lin Qiao, Wang Pengxin, Li Yan, Chen Yingyi

Abstract:

Human activities make land cover gradually change or transition. In this study, we examined the use of Landsat TM data to detect land use change in Kigali between 1987 and 2009 using remote sensing techniques, with data analysis carried out in ENVI and the GIS software ArcGIS. Six categories of land use were distinguished: bare soil, built-up land, wetland, water, vegetation, and others. With remote sensing techniques, we analyzed land use data for 1987, 1999 and 2009, identified the changed areas, and characterized the dynamics of land use in Kigali city over the 22 years studied. Based on the relevant Landsat data, the research focused on land use change in relation to the role of remote sensing in the process of urbanization. The results show a rapid increase in built-up land between 1987 and 1999 and a large decrease in vegetation caused by the rebuilding of the city after the 1994 genocide, while in the period from 1999 to 2009 there was a reduction in built-up land and vegetation after the authorities of Kigali city established a Master Plan, under which constructions outside the scope of the Master Plan were demolished. Through the expansion of its urban area, Rwanda's capital, Kigali City, is increasing the internal employment rate and attracting business investors and the service sector to improve the economy, which will accommodate population growth and provide a better life. The overall planning of the city of Kigali considers the environment, land use, infrastructure, cultural and socio-economic factors, economic development and population forecasts, urban development, and constraint specification. To achieve this purpose, the Government has set out, within the overall plan for Kigali, different stages with detailed descriptions of the design, strategy and action plan that will guide Kigali planners and members of the public towards more detailed regional plans and practical measures. Thus, land use change largely reflects human activity in Kigali and plays an important role in national decision-making. Another aspect to take into account is the natural situation of Kigali city: agriculture in the region does not occupy a dominant position, and with population growth and socio-economic development, the built-up area will gradually expand and speed up the process of urbanization. As a developing country, Rwanda has a continually growing population, a low rate of land utilization, and still-low urbanization. As mentioned earlier, the 1994 genocide, population growth and urbanization processes have been the factors driving the dramatic changes in land use. Further research should focus on analysis of Rwanda’s natural resources and the social and economic factors that could be driving forces of land use change.
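As a rough illustration of the kind of post-classification comparison described above, the sketch below tallies per-class areas for two classified Landsat scenes and reports the change. The class codes and file names are assumptions, not taken from the study.

```python
# Hedged sketch: comparing per-class areas between two classified Landsat scenes.
import numpy as np

CLASSES = {1: "bare soil", 2: "built up", 3: "wetland", 4: "water", 5: "vegetation", 6: "other"}
PIXEL_AREA_HA = 30 * 30 / 10_000              # Landsat TM pixel (30 m) in hectares

lc_1987 = np.load("kigali_1987_classes.npy")  # classified rasters on the same grid (hypothetical)
lc_2009 = np.load("kigali_2009_classes.npy")

for code, name in CLASSES.items():
    a87 = np.count_nonzero(lc_1987 == code) * PIXEL_AREA_HA
    a09 = np.count_nonzero(lc_2009 == code) * PIXEL_AREA_HA
    print(f"{name:10s} 1987: {a87:9.1f} ha   2009: {a09:9.1f} ha   change: {a09 - a87:+9.1f} ha")
```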

Keywords: land use change, urbanization, Kigali City, Landsat

Procedia PDF Downloads 288
448 Genetic Variations of Two Casein Genes among Maghrabi Camels Reared in Egypt

Authors: Othman E. Othman, Amira M. Nowier, Medhat El-Denary

Abstract:

Camels play an important socio-economic role within the pastoral and agricultural systems in the dry and semi-dry zones of Asia and Africa. Camels are economically important animals in Egypt, where they are dual-purpose animals (meat and milk). Analysis of the chemical composition of camel milk showed that the total protein content ranges from 2.4% to 5.3% and is divided into casein and whey proteins. The casein fraction constitutes 52% to 89% of total camel milk protein and is divided into four fractions, namely αs1-, αs2-, β- and κ-caseins, which are encoded by four tightly linked genes. In spite of the important role of casein genes and the effects of their genetic polymorphisms on quantitative traits and technological properties of milk, studies on the genetic polymorphism of camel milk genes are still limited. For this reason, this work focused - using PCR-RFLP and sequencing analysis - on the identification of genetic polymorphisms and SNPs of two casein genes in the Maghrabi camel breed, a dual-purpose camel breed in Egypt. The amplified 488-bp fragments of the camel κ-CN gene were digested with AluI endonuclease. The results showed three different genotypes in the tested animals: CC with three digested fragments at 203, 127 and 120 bp; TT with three digested fragments at 203, 158 and 127 bp; and CT with four digested fragments at 203, 158, 127 and 120 bp. The frequencies of the three detected genotypes were 11.0% for CC, 48.0% for TT and 41.0% for CT. Sequencing analysis of the two different alleles revealed a single nucleotide polymorphism (C→T) at position 121 of the amplified fragments, which destroys a restriction site (AG/CT) in allele T and results in the presence of the two alleles C and T in the tested animals. The nucleotide sequences of κ-CN alleles C and T were submitted to GenBank under accession numbers KU055605 and KU055606, respectively. The primers used in this study amplified 942-bp fragments spanning exon 4 to exon 6 of the camel αS1-casein gene. The amplified fragments were digested with two different restriction enzymes, SmlI and AluI. Digestion with SmlI did not reveal any restriction site, whereas digestion with AluI revealed two restriction sites (AG^CT) at positions 68^69 and 631^632, yielding three digested fragments of 68, 563 and 293 bp. The nucleotide sequence of this fragment of the camel αS1-casein gene was submitted to GenBank under accession number KU145820. In conclusion, the genetic characterization of quantitative trait genes associated with production traits such as milk yield and composition is an important step towards the genetic improvement of livestock species through the selection of superior animals based on favorable alleles and genotypes, i.e., marker-assisted selection (MAS).
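As a small illustration of how the reported genotype frequencies translate into allele frequencies, the snippet below applies the standard gene-counting calculation to the κ-CN figures given above (CC 11%, CT 41%, TT 48%).

```python
# Allele frequencies derived from the reported kappa-casein AluI genotype frequencies.
genotype_freq = {"CC": 0.11, "CT": 0.41, "TT": 0.48}

freq_C = genotype_freq["CC"] + genotype_freq["CT"] / 2  # each CT animal carries one C allele
freq_T = genotype_freq["TT"] + genotype_freq["CT"] / 2
print(f"Allele C: {freq_C:.3f}, Allele T: {freq_T:.3f}")  # 0.315 and 0.685
```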

Keywords: genetic polymorphism, SNP polymorphism, Maghrabi camels, κ-Casein gene, αS1-Casein gene

Procedia PDF Downloads 587
447 Food Design as a University-Industry Collaboration Project: An Experience Design on Controlling Chocolate Consumption and Long-Term Eating Behavior

Authors: Büşra Durmaz, Füsun Curaoğlu

Abstract:

While technology-oriented developments in the modern world change our perceptions of time and speed, they also strain our food consumption patterns, such as taking pleasure in what we eat and eating slowly. The habit of eating quickly and hastily not only prevents us from appreciating the taste of the food eaten but also undermines the ability to recognize satiety, and therefore causes many health problems. In this context, especially over the last ten years, food manufacturers concerned with healthy living and consumption have been collaborating with industrial designers on food design. The consumers of the new century, living under uncontrolled time pressure, seek support from small snacks as a source of happiness and pleasure in the little time they can spare. At this point, chocolate in particular has been, for hundreds of years, a source of both happiness and pleasure for its consumers. However, when the portions eaten cannot be controlled, a pleasure food such as chocolate can cause both health problems and many emotional problems, especially feelings of guilt. Fast food, meaning food that is prepared and consumed quickly, has been spreading rapidly around the world in recent years. This study covers the process and results of a chocolate design based on user experience, carried out as a university-industry cooperation project within the scope of Eskişehir Technical University graduation projects. The aim of the project is a creative product design that enables the user to experience chocolate consumption within a healthy eating approach. For this purpose, concepts such as pleasure, satiety, and taste were discussed, and a case study based on the qualitative research paradigm was structured within a user-oriented design approach. The research processes included a literature review on topics such as mouth anatomy, tongue structure, taste, the functions of eating in the brain, and hormones and chocolate; a survey with 151 people; semi-structured face-to-face interviews with 7 people during the experience design process; video analysis; and project diaries. The research found that melting in the mouth is the experience users prefer for extending pleasure-based chocolate eating over a longer period while eating chocolate in healthy portions. In this context, the study includes the production of sketches, mock-ups and prototypes of the product. As a result, a product packaging design was developed that supports the active role of the senses where consumption begins, such as sight, smell and hearing, so that the chocolate is consumed by melting and the salivary glands, the most important stimulus, are actively engaged in order to provide healthy, long-term, pleasure-based consumption.

Keywords: chocolate, eating habit, pleasure, saturation, sense of taste

Procedia PDF Downloads 58
446 Development of a Home-Hotel-Hospital-School Community-Based Palliative Care Model for Patients with Cancer in Suratthani, Thailand

Authors: Patcharaporn Sakulpong, Wiriya Phokhwang

Abstract:

Background: Banpunrug (Love Sharing House), established in 2013, provides community-based palliative care for patients with cancer from 7 provinces in southern Thailand. These patients come to receive outpatient chemotherapy and radiotherapy at Suratthani Cancer Hospital; they are poor and uneducated and need accommodation during their 30-45 day course of therapy. Methods: Community participatory action research (PAR) was employed to establish a model of palliative care for patients with cancer. The participants included health care providers, the community, and patients and families. The PAR process included problem identification and needs assessment, community and team establishment, field survey, organization founding, planning of the model of care, action and inquiry (PDCA), outcome evaluation, and model distribution. Results: The model of care at Banpunrug involves the concepts of the HHHS model, in that Banpunrug is a Home for patients; patients live in a house as comfortable as a Hotel; the patients are given care and living facilities similar to those in a Hospital; and the house is a School for patients to learn how to take care of themselves, how to live well with cancer, and, most importantly, how to prepare themselves for a good death. The house is also a school of humanized care for health care providers. Banpunrug's philosophy of care is based on friendship therapy, social and spiritual support, community partnership, patient-family centeredness, a Live & Love sharing house, and holistic and humanized care. With this philosophy, the house is managed as a home for the patients and everyone involved; everything is free of charge for all eligible patients and their family members; all facilities and living expenses are donated by benevolent people, friends, and the community. Everyone, including patients and family, has a sense of belonging to the house, and there is no authority divide between health care providers and the patients in the house. The house is situated in a temple and a community and is supported by many local nonprofit organizations and healthcare facilities, such as a sub-district health promotion hospital and Suratthani Cancer Hospital. Village health volunteers and multi-professional health care volunteers have contributed not only appropriate care but also knowledge and experience to develop a distinctive HHHS community-based palliative care model for patients with cancer. Since its opening, the house has been a home for more than 400 patients and 300 family members. It is also a model for many national and international healthcare organizations and providers, who come to visit and learn about palliative care in and by the community. Conclusions: The success of this palliative care model comes from community involvement, multi-professional volunteers and contributions, and the concepts of the HHHS model. Banpunrug promotes consistent care across the cancer trajectory, independent of prognosis, in order to strengthen the full integration of palliative care.

Keywords: community-based palliative care, model, participatory action research, patients with cancer

Procedia PDF Downloads 253
445 Midterm Clinical and Functional Outcomes After Treatment with Ponseti Method for Idiopathic Clubfeet: A Prospective Cohort Study

Authors: Neeraj Vij, Amber Brennan, Jenni Winters, Hadi Salehi, Hamy Temkit, Emily Andrisevic, Mohan V. Belthur

Abstract:

Idiopathic clubfoot is a common lower extremity deformity with an incidence of 1:500. The Ponseti method is well known as the gold standard of treatment. However, there is limited functional data demonstrating correction of the clubfoot after treatment with the Ponseti method. The purpose of this study was to evaluate the clinical and functional outcomes after the Ponseti method using the Clubfoot Disease-Specific Instrument (CDS) and pedobarography. This IRB-approved prospective study included patients aged 3-18 who were treated for idiopathic clubfoot with the Ponseti method between January 2008 and December 2018. Age-matched controls were identified through siblings of clubfoot patients and other community members. Treatment details were collected through a chart review of the included patients. Laboratory assessment included a physical exam, gait analysis, and pedobarography. The Pediatric Outcomes Data Collection Instrument (PODCI) and the Clubfoot Disease-Specific Instrument were also obtained from clubfoot (CF) patients. The Wilcoxon rank-sum test was used to study differences between the CF patients and the typically developing (TD) patients, with statistical significance set at p < 0.05. A total of 37 patients were enrolled in our study: 21 had previously been treated for CF and 16 were TD. 94% of the CF patients had bilateral involvement. The age at the start of treatment was 29 days, the average total number of casts was seven to eight, and the average total number of casts after Achilles tenotomy was one. The recurrence rate was 25%, tenotomy was required in 94% of patients, and ≥1 tenotomy was required in 25% of patients. There were no significant differences in step length, step width, stride length, force-time integral, maximum peak pressure, foot progression angles, stance phase time, single-limb support time, double-limb support time, or gait cycle time between children treated with the Ponseti method and typically developing children. The average post-treatment Pirani and Dimeglio scores were 5.50±0.58 and 15.29±1.58, respectively. The average post-treatment PODCI subscores were: Upper Extremity 90.28, Transfers 94.6, Sports 86.81, Pain 86.20, Happiness 89.52, Global 88.6. The average post-treatment Clubfoot Disease-Specific Instrument subscores were: Satisfaction 73.93, Function 80.32, Overall 78.41. The Ponseti method has a very high success rate and remains the gold standard in the treatment of idiopathic clubfoot. Timely management leads to good outcomes and a low need for repeated Achilles tenotomy. Children treated with the Ponseti method demonstrate good functional outcomes as measured through pedobarography. Pedobarography may have clinical utility in studying congenital foot deformities. Objective measures of hours of brace wear could represent an improvement in clubfoot care.
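A minimal sketch of the statistical comparison described above: a Wilcoxon rank-sum test between the CF and TD groups for a single pedobarographic variable. The numbers shown are illustrative dummy values, not study data.

```python
# Hedged sketch of a Wilcoxon rank-sum (Mann-Whitney-type) group comparison.
from scipy.stats import ranksums

cf_step_length = [38.2, 41.0, 39.5, 40.1, 37.8]   # illustrative dummy values (cm)
td_step_length = [40.5, 42.3, 41.1, 39.9, 43.0]   # illustrative dummy values (cm)

stat, p_value = ranksums(cf_step_length, td_step_length)
print(f"Wilcoxon rank-sum statistic = {stat:.3f}, p = {p_value:.3f}")  # compare against p < 0.05
```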

Keywords: functional outcomes, pediatric deformity, patient-reported outcomes, talipes equinovarus

Procedia PDF Downloads 59
444 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing

Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel

Abstract:

There is strong evidence that the climate has changed significantly, affecting various sectors including public health. The recommended feasible solution is to adopt development trajectories that combine both mitigation and adaptation measures to improve resilience pathways. This approach demands consideration of the complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how Demographic and Health Surveys (DHS) linked with remote sensing (RS) technology and meteorological information can be used as tools to inform climate change resilient development and evaluation for the health sector. A methodological review was conducted whereby a number of studies were content-analyzed to find appropriate indicators and indices for household climate resilience and approaches for integrating them. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using a participatory approach of pairwise weighting by selected national stakeholders from meetings/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from the MEASURE Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources of indicators of climate change exposure were also identified. For the purpose of preliminary reporting, the operationalization of selected indicators was discussed to produce the methodological approach to be used in a comparative resilience analysis study. It was found that the household climate resilience index depends on the combination of three indices, namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES). It was also found that DHS data alone cannot support resilience evaluation unless integrated with other data sources, notably flooding data as a measure of vulnerability, remote sensing imagery of the Normalized Difference Vegetation Index (NDVI), and meteorological data (deviation from rainfall patterns). It can be concluded that if these indices retrieved from DHS datasets are computed and scientifically integrated, they can produce a single climate resilience index, and resilience maps could be generated at different spatial and temporal scales to support targeted interventions for climate resilient development and evaluation. However, further studies are needed to test the sensitivity of the index in comparative resilience analysis among selected regions.
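A conceptual sketch of how the three sub-indices named above might be combined into a single household resilience index. The min-max scaling, signs, and equal weights are assumptions for illustration only; the study derives indicator weights through stakeholder pairwise ranking.

```python
# Conceptual sketch of combining HC, HHS, and HES into one resilience index.
import numpy as np

def minmax(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def resilience_index(hc, hhs, hes, w=(1/3, 1/3, 1/3)):
    """Higher adaptive/mitigation capacity raises resilience; higher sensitivity and
    exposure lower it, so those two sub-indices enter with a negative sign (an assumption)."""
    hc, hhs, hes = minmax(hc), minmax(hhs), minmax(hes)
    return w[0] * hc - w[1] * hhs - w[2] * hes

# Example for three hypothetical households:
print(resilience_index([0.8, 0.4, 0.6], [0.2, 0.7, 0.5], [0.1, 0.9, 0.4]))
```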

Keywords: climate change, resilience, remote sensing, demographic and health surveys

Procedia PDF Downloads 145
443 Regulating the Ottomans on Turkish Television and the Making of Good Citizens

Authors: Chien Yang Erdem

Abstract:

This paper takes up the proliferating historical dramas and children’s programs featuring the Ottoman-Islamic legacy on Turkish television as a locus where the processes of subjectification take place. A critical analysis of this emergent cultural phenomenon reveals an alliance of neoliberal and neoconservative political rationalities based on which the Turkish media is restructured to transform society. The existing debates have focused on how the Ottoman historical dramas manifest the Justice and Development Party’s (Adalet ve Kalkınma Partisi) neo-Ottomanist ideology and foreign policy. However, this approach tends to overlook the more complex relationship between the media, government, and society. Employing Michel Foucault’s notion of 'technologies of the self,' this paper aims to examine the governing practices that are deployed to regulate the media and to transform individual citizens into governable subjects in contemporary Turkey. First, through a brief discussion of recent development of the Turkish media towards an authoritarian model, the paper suggests that the relation between the Ottoman television drama and the political subject in question cannot be adequately examined without taking into account the force of the market. Second, by focusing on the managerial restructuring of the Turkish Television and Radio Corporation (Türkiye Radyo ve Televizyon Kurumu), the paper aims to illustrate the rationale and process through which the Turkish media sector is transformed into an integral part of the free market where the government becomes a key actor. The paper contends that this new sphere of free market is organized in a way that enables direct interference of the government and divides media practitioners and consumers into opposing categories through their own participation in the media market. On the one hand, a 'free subject' is constituted based on the premise that the market is a sphere where individuals are obliged to exercise their right to freedom (of choice, lifestyle, and expression). On the other hand, this 'free subject' is increasingly subjugated to such disciplinary practices as censorship for being on the wrong side of the government. Finally, the paper examines the relation between the restructured Turkish media market and the proliferation of Ottoman television drama in the 2010s. The study maintains that the reorganization of the media market has produced a condition where private sector is encouraged to take an active role in reviving Turkey’s Ottoman-Islamic cultural heritage and promulgating moral-religious values. Paying specific attention to the controversial case of Magnificent Century (Muhteşem Yüzyıl) in contrast with TRT’s Ottoman historical drama and children’s programs, the paper aims to identify the ways in which individual citizens are directed to conduct themselves as a virtuous citizenry. It is through the double movement between the governing practices associated with the media market and those concerning the making of a 'conservative generation' that a subject of citizenry of new Turkey is constituted.

Keywords: neoconservatism, neoliberalism, ottoman historical drama, technologies of the self, Turkish television

Procedia PDF Downloads 118
442 Outcomes of Pregnancy in Women with TPO Positive Status after Appropriate Dose Adjustments of Thyroxin: A Prospective Cohort Study

Authors: Revathi S. Rajan, Pratibha Malik, Nupur Garg, Smitha Avula, Kamini A. Rao

Abstract:

This study aimed to analyse pregnancy outcomes in patients with TPO positivity after appropriate L-thyroxin supplementation with close surveillance. All pregnant women attending the antenatal clinic at Milann-The Fertility Center, Bangalore, India, from Aug 2013 to Oct 2014 whose booking TSH was more than 2.5 mIU/L were included, along with pregnant women with prior hypothyroidism who were TPO positive. Those with TPO-positive status were vigorously managed with appropriate thyroxin supplementation, and the doses were readjusted every 3 to 4 weeks until delivery. Women with recurrent pregnancy loss were also tested for TPO positivity and, if positive, were monitored serially with TSH and fT4 levels every 3 to 4 weeks and appropriately supplemented with thyroxin when the levels fluctuated. Testing was done after informed consent in all these women. The statistical software packages SAS 9.2, SPSS 15.0, Stata 10.1, MedCalc 9.0.1, Systat 12.0 and R environment ver. 2.11.1 were used for the analysis of the data. 460 pregnant women were screened for thyroid dysfunction at booking, of whom 52% were hypothyroid; the majority (31.08%) were subclinically hypothyroid and the remainder were overt. 25% of the total number of patients screened were TPO positive. The pregnancy complications observed in the TPO-positive women were gestational glucose intolerance (60%), threatened abortion (21%), midtrimester abortion (4.3%), premature rupture of membranes (4.3%), cervical funneling (4.3%) and fetal growth restriction (3.5%). 95.6% of the patients who followed up until the end delivered beyond 30 weeks. 42.6% of these patients had a previous history of recurrent abortions or adverse obstetric outcomes, and 21.7% of the delivered babies required NICU admission. Obstetric outcomes in our study, in terms of midtrimester abortions, placental abruption, and preterm delivery, improved after close monitoring of the thyroid hormone levels (TSH and fT4) every 3 to 4 weeks with appropriate dose adjustment throughout pregnancy. Euthyroid women with TPO-positive status enrolled in the study incidentally were those with recurrent abortions/infertility who required thyroxin supplements due to elevated thyroid hormone (TSH, fT4) levels during the course of their pregnancy. Significant associations were found with age >30 years and hyperhomocysteinemia (p=0.017), recurrent pregnancy loss or previous adverse obstetric outcomes (p=0.067) and APLA (p=0.029). TPO antibody levels >600 IU/ml were significantly associated with the development of gestational hypertension (p=0.041) and fetal growth restriction (p=0.082). Euthyroid women with TPO positivity were also screened periodically to counter fluctuations of the thyroid hormone levels with appropriate thyroxin supplementation. Thus, early identification along with aggressive management of thyroid dysfunction, and stratification of these patients based on their TPO status with appropriate thyroxin supplementation beginning in the first trimester, will aid risk modulation and also help avert complications.

Keywords: TPO antibody, subclinical hypothyroidism, anti nuclear antibody, thyroxin

Procedia PDF Downloads 305
441 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. EEG has been demonstrated to be the physiological signal with the highest classification accuracy for emotion recognition. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than working locally on the edge. That strategy has hampered research, and the full potential of using an edge AI device has yet to be realized. Edge AI devices are high-performance computers that can process complex algorithms: they can collect, process, and store data on their own, and they can run complicated algorithms such as localization, detection, and recognition in real-time applications, making them powerful embedded devices. The NVIDIA Jetson series, specifically the Jetson Nano device, was used in the implementation. The cEEGrid, which is integrated with the open-source brain-computer interface platform OpenBCI, is used to collect EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning-based classifiers were used to perform graphical spectrogram categorization of EEG signals and to predict emotional states based on input data properties. The EEG signals were analyzed using the K-Nearest Neighbor (KNN) technique, a supervised learning method, until the emotional state was identified. In the EEG signal processing, after each EEG signal has been received in real time, the Fast Fourier Transform (FFT) is used to translate it from the time to the frequency domain and to observe the frequency bands in each EEG signal. To appropriately capture the variance of each EEG frequency band, the power density, standard deviation, and mean are calculated and employed as features. The next stage is to use these selected features to predict emotion in the EEG data with the KNN technique. Arousal and valence datasets are used to train the parameters of the KNN classifier. Because classification, recognition of specific classes, and emotion prediction are conducted both online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. Finally, this implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device, such as the NVIDIA Jetson Nano. On the edge, EEG-based emotion identification can be employed in applications that can rapidly expand both research and industrial adoption.
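A minimal sketch of the processing chain described above: FFT-based band-power features extracted from an EEG epoch, followed by a KNN classifier for arousal/valence classes. The sampling rate, band limits, and training arrays are assumptions, not values from the paper.

```python
# Hedged sketch of FFT band-power feature extraction and KNN classification.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250  # assumed sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_features(epoch):
    """Power density sum, standard deviation, and mean per frequency band."""
    freqs = np.fft.rfftfreq(len(epoch), d=1 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    feats = []
    for lo, hi in BANDS.values():
        band = psd[(freqs >= lo) & (freqs < hi)]
        feats += [band.sum(), band.std(), band.mean()]
    return np.array(feats)

# Hypothetical labelled epochs (random placeholders standing in for arousal/valence data).
X_train = np.vstack([band_features(e) for e in np.random.randn(40, FS * 2)])
y_train = np.random.choice(["high_arousal", "low_arousal"], size=40)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.predict(band_features(np.random.randn(FS * 2)).reshape(1, -1)))
```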

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 84
440 Eucalyptus camaldulensis Leaves Attacked by the Gall Wasp Leptocybe invasa: A Phyto-Volatile Constituents Study

Authors: Maged El-Sayed Mohamed

Abstract:

Eucalyptus camaldulensis is one of the most well-known species of the genus Eucalyptus in the Middle East; its importance relies on the high production of its unique volatile constituents, which exhibit many medicinal and pharmacological activities. The gall-forming wasp Leptocybe invasa has recently emerged as the main pest attacking E. camaldulensis and causing severe injury. The wasp lays its eggs in the petiole and midrib of leaves and in the stems of young shoots of E. camaldulensis, which leads to gall formation. Gall formation by L. invasa damages growing shoots and leaves of Eucalyptus, resulting in leaf abscission and drying. AIM: This study is an attempt to investigate the effect of gall wasp (Leptocybe invasa) attack on the volatile constituents of E. camaldulensis. This could help in the control of this wasp through the stimulation of plant defenses or the production of new allelochemicals or insecticides. The study of the volatile constituents of Eucalyptus before and after attack by the wasp can also support the reuse and recycling of infected Eucalyptus trees for new pharmacological and medicinal activities. Methodology: Fresh gall wasp-attacked and healthy leaves (100 g each) were cut and immediately subjected to hydrodistillation using a Clevenger-type apparatus for 3 hours. The volatile fractions isolated were analyzed using gas chromatography/mass spectrometry (GC/MS). Kovats retention indices (RI) were calculated with respect to a set of co-injected standard hydrocarbons (C10-C28). Compounds were identified by comparing their spectral data and retention indices with the Wiley Registry of Mass Spectral Data 10th edition (April 2013), the NIST 11 Mass Spectral Library (NIST11/2011/EPA/NIH) and literature data. Results: Fifty-nine components, representing 89.13% and 88.60% of the total volatile fraction contents respectively, were quantitatively analyzed. Twenty-six major compounds at an average concentration greater than 0.1 ± 0.02% were used for the statistical comparison. Of these major components, twenty-one were found in both the attacked and healthy leaf fractions at different concentrations, and five components, namely the monoterpene p-mentha-2,4(8)-diene and the sesquiterpenes δ-elemene, β-elemene, E-caryophyllene and bicyclogermacrene, were unique to the attacked-leaf fraction. CONCLUSION: Newly produced components, or those commonly found in the volatile fraction whose concentration changed, could represent part of the plant defense mechanisms or might be elements of the plant's allelopathic and communication mechanisms. Identification of the components of the gall wasp-damaged leaves can help in their recycling for different physiological, pharmacological and medicinal uses.
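As an illustration of the retention-index calculation mentioned above, the snippet below applies the linear (van den Dool-Kratz) form commonly used for temperature-programmed GC-MS runs against co-injected n-alkanes. The retention times are dummy values, and the exact formula variant used by the authors is not stated in the abstract.

```python
# Linear retention index relative to bracketing n-alkane standards (illustrative only).
def retention_index(t_x, t_n, t_n1, n):
    """t_x: analyte retention time; t_n, t_n1: retention times of the n-alkanes
    with n and n+1 carbons that bracket the analyte."""
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

# Example: an analyte eluting at 13.0 min between C14 (12.3 min) and C15 (14.1 min).
print(round(retention_index(13.0, 12.3, 14.1, 14)))  # ~1439
```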

Keywords: Eucalyptus camaldulensis, eucalyptus recycling, gall wasp, Leptocybe invasa, plant defense mechanisms, Terpene fraction

Procedia PDF Downloads 333
439 Use of Analytic Hierarchy Process for Plant Site Selection

Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat

Abstract:

This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the selection of a site for a new plant by a corporation. Due to intense competition at the global level, multinational corporations continuously strive to minimize the production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much lower in countries such as India, China, and Indonesia. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, General Motors) have shifted their manufacturing plants abroad. The continued expansion and availability of the Internet, along with technological advances in computer hardware and software around the globe, have made it easier for U.S. corporations to expand abroad as they seek to reduce production costs. In particular, the management of multinational corporations is constantly evaluating countries at a broad level, or cities within specific countries, where some or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on the preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or a designated data analytics engineer. It serves as a tool first to evaluate the plant site selection criteria and second to evaluate alternative plant sites against these criteria in a systematic manner. Examples of site selection criteria are transportation modes, taxes, energy modes, labor force availability, labor rates, raw material availability, political stability, and land costs. As a necessary first step under AHP, evaluation criteria and alternative plant site countries are identified. Depending upon the fidelity of the analysis, specific cities within a country can also be chosen as alternative facility locations. AHP experience in this type of analysis indicates that the initial analysis can be performed at the country level; once a specific country is chosen via AHP, secondary analyses can be performed for specific cities or counties within that country. AHP analysis is usually based on the preference ratings of a decision maker (e.g., 1 to 5, 1 to 7, or 1 to 9, where 1 means least preferred and the highest value means most preferred). The decision maker first assigns preference ratings criterion against criterion, creating a criteria matrix, and then assigns preference ratings alternative against alternative for each criterion. Once this data is collected, AHP is applied first to obtain the rank ordering of the criteria. Next, the rank ordering of the alternatives is computed against each criterion, resulting in an alternatives matrix. Finally, the overall rank ordering of alternative facility locations is obtained by matrix multiplication of the alternatives matrix and the criteria matrix. The most practical aspect of AHP is the ‘what if’ analysis that the decision maker can conduct after the initial results, which provides valuable information on the sensitivity of specific criteria to other criteria and alternatives.
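A minimal numerical sketch of the AHP computation described above: each pairwise-comparison matrix is reduced to a priority (weight) vector via its principal eigenvector, and the overall site ranking follows from multiplying the matrix of alternative priorities by the criteria weights. The 3x3 judgment values are illustrative only and do not come from the paper.

```python
# Hedged AHP sketch: priority vectors from pairwise comparisons, then overall ranking.
import numpy as np

def priority_vector(pairwise):
    """Principal eigenvector of a reciprocal pairwise-comparison matrix, normalized to sum to 1."""
    vals, vecs = np.linalg.eig(pairwise)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.sum()

# Criteria judgments, e.g. labor rates vs. land costs vs. political stability (illustrative).
criteria = np.array([[1, 3, 5],
                     [1/3, 1, 2],
                     [1/5, 1/2, 1]])
w = priority_vector(criteria)

# One column per criterion: priorities of three candidate countries under that criterion.
alt_priorities = np.column_stack([
    priority_vector(np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])),
    priority_vector(np.array([[1, 1/3, 1], [3, 1, 3], [1, 1/3, 1]])),
    priority_vector(np.array([[1, 5, 2], [1/5, 1, 1/2], [1/2, 2, 1]])),
])

overall = alt_priorities @ w   # overall rank ordering of the candidate sites
print(overall)
```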

Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings

Procedia PDF Downloads 270
438 Forced Migrants in Israel and Their Impact on the Urban Structure of Southern Neighborhoods of Tel Aviv

Authors: Arnon Medzini, Lilach Lev Ari

Abstract:

Migration, the driving force behind increased urbanization, has made cities much more diverse places to live in. Nearly one-fifth of all migrants live in the world’s 20 largest cities, and in many of these global cities migrants constitute over a third of the population. Many contemporary migrants are in fact ‘forced migrants’, pushed from their countries of origin by political or ethnic violence, persecution, or natural disasters. During the past decade, large numbers of labor migrants and asylum seekers have migrated from African countries to Israel via Egypt. Their motives for leaving their countries of origin include ongoing and bloody wars on the African continent as well as corruption, severe poverty and hunger, and economic and political disintegration. Most of the African migrants came to Israel from Eritrea and Sudan, seeing Israel as the closest natural geographic asylum to Africa, and soon found their way to the metropolitan Tel Aviv area. There they concentrated in poor neighborhoods in the southern part of the city, where they live in conditions of crowding, poverty, and poor sanitation. Today around 45,000 African migrants reside in these neighborhoods, yet there is no legal option for expelling them due to the dangers they might face upon returning to their native lands. Migration of such magnitude into the weakened neighborhoods of south Tel Aviv can lead to the destruction of physical, social and human infrastructures. The character of the neighborhoods is changing, and the local population is the main victim; local residents must bear the brunt of the failure of both the authorities and the government to handle the illegal inhabitants. The extremely crowded living conditions place a heavy burden on the dilapidated infrastructures in the weakened areas where the refugees live and increase the distress of the veteran residents of the neighborhoods. Some problems are economic, some stem from damage to the services the residents are entitled to, and others from a drastic decline in their standard of living. Even the public parks no longer serve the purpose for which they were originally established, namely the well-being of the public and the neighborhood residents; they have become the main gathering place for the infiltrators and a center of crime and violence. Based on secondary data analysis (for example, data from Israel's Population, Immigration and Border Authority and from the hotline for refugees and migrants), the objective of this presentation is to discuss the effects of forced migration to Tel Aviv on the tensions between the local population and the immigrants, between the local population and the state authorities, and between human rights groups and nationalist local organizations. We will also describe the changes that have taken place in the urban infrastructure of the city of Tel Aviv and discuss the efficacy of various Israeli strategic trajectories for handling the human problems arising in the marginal urban regions where the forced migrant population is concentrated.

Keywords: African asylum seekers, forced migrants, marginal urban regions, urban infrastructure

Procedia PDF Downloads 230
437 Evaluation of Suspended Particles Impact on Condensation in Expanding Flow with Aerodynamics Waves

Authors: Piotr Wisniewski, Sławomir Dykas

Abstract:

Condensation has a negative impact on turbomachinery efficiency in many energy processes. In technical applications, it is often impossible to dry the working fluid at the nozzle inlet. One of the most popular working fluids is atmospheric air, which always contains water in the form of steam, liquid, or ice crystals. Moreover, it always contains some amount of suspended particles, which influence the phase change process. It is known that the phenomena of evaporation and condensation are connected with the release or absorption of latent heat, which influences the fluid's physical properties and might affect machinery efficiency; therefore, the phase transition has to be taken into account. This research presents an attempt to evaluate the impact of solid and liquid particles suspended in the air on the expansion of moist air at a low expansion rate, i.e., with an expansion rate P≈1000 s⁻¹. A numerical study supported by analytical and experimental research is presented in this work. The experimental study was carried out using an in-house experimental test rig, in which a nozzle was examined for inlet air relative humidity values in the range of 25 to 51%. The nozzle was tested for supersonic flow as well as for flow with shock waves induced by elevated back pressure. The Schlieren photography technique and measurement of the static pressure on the nozzle wall were used for qualitative identification of both condensation and shock waves. A numerical model validated against experimental data available in the literature was used for analysis of the occurring flow phenomena. The analysis of the number, diameter, and character (solid or liquid) of the suspended particles revealed their connection with the importance of heterogeneous condensation. If the expansion of a fluid without suspended particles is considered, condensation triggers a so-called condensation wave that appears downstream of the nozzle throat. If solid particles are considered, with an increasing number of them condensation is triggered upstream of the nozzle throat, decreasing the condensation wave strength. Due to the release of latent heat during condensation, the fluid temperature and pressure increase, leading to a shift of the normal shock upstream. Owing to the relatively large diameters of the droplets created during heterogeneous condensation, they evaporate partially at the shock and continue to evaporate downstream of the nozzle. If liquid water particles are considered, due to their larger radius they do not affect the expanding flow significantly; however, they might be of major importance when considering the compression phenomena, as they will tend to evaporate on the shock wave. This research demonstrates the need for further study of phase change phenomena in supersonic flow, especially the interaction of droplets with the aerodynamic waves in the flow.

Keywords: aerodynamics, computational fluid dynamics, condensation, moist air, multi-phase flows

Procedia PDF Downloads 97
436 The Incorporation of Themes Related to Islandness in Tourism Branding among Cold-Water, Warm-Water, and Temperate-Water Islands

Authors: Susan C. Graham

Abstract:

Islands have a long-established allure for travellers the world over. From the earliest accounts of human history, travellers were drawn by the sense of islandness embodied by these destinations. The concept of islandness describes the essence of what makes islands unique relative to non-islands; it extends beyond geographic interpretations by attempting to capture the specific sense of self exhibited by islanders in relation to their connection to place. The themes most strongly associated with islandness include a) a strong connection to water as both a lifeblood and a physical barrier, b) a unique culture and robust arts community deeply linked to both the island and islanders, c) an appreciation of and for nature, d) a rich sense of history and tradition connected to the place, e) a sense of community and belonging that arose through shared triumphs and struggles, and f) a profound awareness of independence, separateness, and uniqueness derived from both physical and social experience. The island brand, like all brands, is a marketing tactic designed to succinctly express a specific value proposition, which might include a brand symbol, logo, slogan, or representation meant to distinguish one brand from another. If a value proposition is the identification of attributes that separate one brand from another by highlighting the brand's uniqueness, then island brands may, at least in part, be presumed to emphasize islandness as part of the destination brand. Yet it may be naïve to expect all islands to brand themselves using similar themes when islands can differ so substantially in terms of population, geography, political climate, economy, culture, and history. Of particular interest is the increased focus on tourism among 'cold-water' islands. This paper examines the incorporation of themes related to islandness in tourism branding among cold-water, warm-water, and temperate-water islands. The tourism logos of 83 islands were collected and assessed for the use of themes related to islandness, namely water; arts and culture; nature; history and tradition; community and belongingness; and independence, separateness, and uniqueness. The ratings for each islandness theme for each of the 83 island destinations were then analyzed to identify whether differences exist between cold-water, warm-water, and temperate-water islands. Because a general consensus on what constitutes a 'cold-water' destination is lacking, a water temperature of 15 °C was adopted as the threshold, following guidelines from the National Center for Cold Water Safety. For these 83 islands, the average high and average low water temperatures at 196 specific locations, including the capital and the northernmost and southernmost points of each island, were recorded to determine whether each location was a cold-water (average high and low below 15 °C), warm-water (average high and low above 15 °C), or temperate-water (average high above 15 °C and low below 15 °C) location.
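The classification rule described above can be written down directly; the short sketch below applies the 15 °C threshold to a location's average high and low water temperatures. The example temperatures are made up.

```python
# Illustrative classification of island locations by average water temperature (threshold 15 C).
def classify_island(avg_high_c, avg_low_c, threshold=15.0):
    if avg_high_c < threshold and avg_low_c < threshold:
        return "cold-water"
    if avg_high_c > threshold and avg_low_c > threshold:
        return "warm-water"
    return "temperate-water"

print(classify_island(9.0, 4.0))    # cold-water
print(classify_island(27.0, 22.0))  # warm-water
print(classify_island(18.0, 11.0))  # temperate-water
```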

Keywords: branding, cold-water, islands, tourism

Procedia PDF Downloads 203
435 Addressing Supply Chain Data Risk with Data Security Assurance

Authors: Anna Fowler

Abstract:

When considering assets that may need protection, the mind turns to homes, cars, and investment funds. In most cases, the protection of those assets can be covered through security systems and insurance. Data is not the first thought that comes to mind as something needing protection, even though data is at the core of most supply chain operations. It includes trade secrets, the management of personally identifiable information (PII), and consumer data that can be used to enhance the overall experience. Data is considered a critical element of success for supply chains and should be one of the most critical areas to protect. In the supply chain industry, there are two major misconceptions about protecting data: (i) 'We do not manage or store confidential/personally identifiable information (PII)', and (ii) reliance on third-party vendor security. These misconceptions can significantly derail organizational efforts to adequately protect data across environments. The first misconception, 'We do not manage or store confidential/personally identifiable information (PII)', is dangerous as it implies that the organization lacks proper data literacy. Enterprise employees zero in on the PII aspect while neglecting trade secret theft and the complete breakdown of information sharing. To sidestep the first misconception, the second forges an ideology that reliance on third-party vendor security will absolve the company of security risk. In reality, third-party risk has grown over the last two years and is one of the major causes of data security breaches. It is important to understand that a holistic approach should be taken when protecting data, one that does not reduce to purchasing a Data Loss Prevention (DLP) tool; a tool is not a solution. To protect supply chain data, start by providing data literacy training to all employees and by negotiating the security component of vendor contracts to include data literacy training for the individuals and teams that may access company data. It is also important to understand the origin of the data and its movement, including risk identification. Ensure that processes effectively incorporate data security principles. Evaluate and select DLP solutions to address specific concerns and use cases in conjunction with data visibility. These approaches are part of a broader solutions framework called Data Security Assurance (DSA). The DSA framework looks at all of the processes across the supply chain, including their corresponding architecture and workflows, employee data literacy, governance and controls, integration between third- and fourth-party vendors, DLP as a solution concept, and policies related to data residency. Within cloud environments, this framework is crucial for the supply chain industry to avoid regulatory implications and third-/fourth-party risk.

Keywords: security by design, data security architecture, cybersecurity framework, data security assurance

Procedia PDF Downloads 67
434 Thermal Energy Storage Based on Molten Salts Containing Nano-Particles: Dispersion Stability and Thermal Conductivity Using Multi-Scale Computational Modelling

Authors: Bashar Mahmoud, Lee Mortimer, Michael Fairweather

Abstract:

New methods have recently been introduced to improve the thermal property values of molten nitrate salts (a binary mixture of NaNO3:KNO3 in 60:40 wt.%) by doping them with minute concentrations of nanoparticles, in the range of 0.5 to 1.5 wt.%, to form a so-called nano-heat-transfer fluid suited to thermal energy transfer and storage applications. The present study aims to assess the stability of these nanofluids using an advanced computational modelling technique, Lagrangian particle tracking. A multi-phase solid-liquid model is used, in which the motion of the embedded nanoparticles in the suspending fluid is treated by an Euler-Lagrange hybrid scheme with fixed time stepping. This technique enables the evaluation of various multi-scale forces whose characteristic length and time scales are quite different. Two systems are considered, both consisting of 50 nm Al2O3 ceramic nanoparticles suspended in fluids of different density ratios: water (5 to 95 °C) and molten nitrate salt (220 to 500 °C), at volume fractions ranging between 1% and 5%. The dynamic properties of both phases are coupled to the ambient temperature of the fluid suspension. The three-dimensional computational region consists of a 1 μm cube, and particles are homogeneously distributed across the domain. Periodic boundary conditions are enforced. The particle equations of motion are integrated using the fourth-order Runge-Kutta algorithm with a very small time step, Δt, set at 10⁻¹¹ s. The implemented technique captures the key dynamics of aggregating nanoparticles, namely Brownian motion, soft-sphere particle-particle collisions, and Derjaguin-Landau-Verwey-Overbeek (DLVO) forces. These mechanisms underpin the predictive model of aggregation of the nano-suspensions. An energy-transport-based method of predicting the thermal conductivity of the nanofluids is also used to determine the thermal properties of the suspension. The simulation results confirm the effectiveness of the technique; the values are in excellent agreement with theoretical and experimental data obtained from similar studies. The predictions indicate the role of Brownian motion and the DLVO force (comprising both the repulsive electric double layer and the attractive van der Waals contribution) and their influence on the level of nanoparticle agglomeration. The nano-aggregates formed were found to play a key role in governing the thermal behaviour of the nanofluids at various particle concentrations. The presentation will include a quantitative assessment of these forces and mechanisms, leading to conclusions about nanofluid heat transfer performance and thermal characteristics and their potential application in solar thermal energy plants.
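
As a rough illustration of the Euler-Lagrange tracking described above, the sketch below integrates the equation of motion of a single 50 nm particle under Stokes drag and a Langevin-type Brownian force with a fixed time step, using fourth-order Runge-Kutta. All numerical values, the water-like fluid properties, and the reduction of the force balance to drag plus Brownian forcing are simplifying assumptions; the full model in the study also includes soft-sphere collisions and DLVO interactions.

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not taken from the study)
kB    = 1.380649e-23               # Boltzmann constant, J/K
T     = 300.0                      # fluid temperature, K
mu    = 8.9e-4                     # dynamic viscosity, Pa.s (water-like, ~25 C)
d_p   = 50e-9                      # particle diameter, m
rho_p = 3950.0                     # Al2O3 density, kg/m^3
m_p   = rho_p * np.pi * d_p**3 / 6.0          # particle mass, kg
gamma = 3.0 * np.pi * mu * d_p                # Stokes friction coefficient, kg/s
dt    = 1e-11                      # fixed time step, s (value quoted in the abstract)
rng   = np.random.default_rng(0)

def accel(v, f_brownian):
    """Acceleration from Stokes drag (quiescent fluid) plus a Brownian force."""
    return (-gamma * v + f_brownian) / m_p

def rk4_step(x, v):
    """One RK4 step; the stochastic force is sampled once per step and held
    constant over the step, a common simplification in sketches like this one."""
    f_b = rng.normal(0.0, np.sqrt(2.0 * gamma * kB * T / dt), size=3)
    k1v = accel(v, f_b);                k1x = v
    k2v = accel(v + 0.5*dt*k1v, f_b);   k2x = v + 0.5*dt*k1v
    k3v = accel(v + 0.5*dt*k2v, f_b);   k3x = v + 0.5*dt*k2v
    k4v = accel(v + dt*k3v, f_b);       k4x = v + dt*k3v
    v_new = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    x_new = x + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    return x_new % 1e-6, v_new          # periodic 1 micrometre cubic domain

x, v = np.zeros(3), np.zeros(3)
for _ in range(10_000):
    x, v = rk4_step(x, v)
print("position (m):", x, "velocity (m/s):", v)
```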

Keywords: thermal energy storage, molten salt, nano-fluids, multi-scale computational modelling

Procedia PDF Downloads 170
433 Antibacterial Effects of Zinc Oxide Nanoparticles as Alternative Therapy on Drug-Resistant Group B Streptococcus Strains Isolated from Pregnant Women

Authors: Leila Fozouni, Anahita Mazandarani

Abstract:

Background: Maternal infections are the most common cause of infections in infants, and the level and severity of infection depend strongly on the degree of colonization of the bacteria in the mother; thus, the occurrence of aggressive disease is not unexpected in mothers with very high colonization. Group B Streptococcus is part of the normal flora of the gastrointestinal and genital tracts in women and is the leading cause of septicemia and meningitis in newborns. Today, zinc oxide nanoparticles are regarded as among the most commonly used and safest nanoparticles for combating Gram-positive and Gram-negative bacteria. This study aims to determine the antibacterial effects of zinc oxide nanoparticles on the growth of drug-resistant group B Streptococcus strains isolated from pregnant women. Materials and Methods: This cross-sectional study was conducted on 150 pregnant women at 28–37 weeks of gestation admitted to seven hospitals and maternity wards in Golestan province, northeast Iran. For bacterial identification, rectovaginal swabs were first inoculated into Todd-Hewitt broth and then cultured on blood agar (containing 5% sheep blood). Microbiological and PCR methods were then performed to detect group B Streptococci. Disk diffusion and broth microdilution tests were used to determine bacterial susceptibility to antibiotics according to CLSI M100 (2021) criteria. The antibacterial properties of zinc oxide nanoparticles were evaluated using the agar well-diffusion method. Results: The prevalence of group B Streptococcus among the pregnant women was 18%. Of the twenty-seven positive cultures, 62.96% were from women older than thirty years. Ninety percent and 45% of isolates were resistant to clindamycin and erythromycin, respectively, and susceptibility to cefazolin was 71%. In addition, susceptibility to ampicillin and penicillin was 74% and 55%, respectively. The results showed that 82% of erythromycin-resistant, 92% of clindamycin-resistant, and 78% of cefazolin-resistant isolates were eliminated by zinc oxide nanoparticles at a concentration of 100 mg/L. Furthermore, ZnO NPs could inhibit all drug-resistant isolates at a concentration of 200 mg/mL (MIC90 ≥ 200). Conclusion: Since the drug resistance of group B streptococci against various antibiotics is increasing, determining and investigating the drug-resistance pattern of this bacterium to different antibiotics, in order to prevent the indiscriminate use of antibiotics by pregnant women and ultimately to prevent infant mortality, seems necessary. Overall, ZnO NPs showed a strong antimicrobial effect, and the bactericidal effect was found to increase with increasing nanoparticle concentration.

Keywords: group B beta-hemolytic streptococcus, pregnant women, zinc oxide nanoparticles, drug resistance

Procedia PDF Downloads 67
432 Gold Nano Particle as a Colorimetric Sensor of HbA0 Glycation Products

Authors: Ranjita Ghoshmoulick, Aswathi Madhavan, Subhavna Juneja, Prasenjit Sen, Jaydeep Bhattacharya

Abstract:

Type 2 diabetes mellitus (T2DM) is a complex, multifactorial metabolic disease in which the blood sugar level rises. One of the major consequences of this elevated blood sugar is the formation of advanced glycation endproducts (AGEs) through a series of chemical and biochemical reactions. AGEs are detrimental because they lead to severe pathogenic complications. They are a group of structurally diverse chemical compounds formed by nonenzymatic reactions between the free amino groups (-NH2) of proteins and the carbonyl groups (>C=O) of reducing sugars, a reaction known as the Maillard reaction. It starts with the formation of a reversible Schiff base linkage, which after some time rearranges to form Amadori products along with dicarbonyl compounds. Amadori products are very unstable; hence, rearrangement continues until stable products are formed. During the course of the reaction, many chemically uncharacterized intermediates and reactive byproducts are formed, which can be termed early glycation products. When the reaction completes, structurally stable chemical compounds are formed, which are termed advanced glycation endproducts. Although not all glycation products have been well characterized, some fluorescent compounds, e.g., pentosidine, malondialdehyde (MDA), and carboxymethyllysine (CML), have been identified as AGEs, and α-dicarbonyls or oxoaldehydes such as 3-deoxyglucosone (3-DG) have been identified as intermediates. In this work, gold nanoparticles (GNPs) were used as an optical indicator of glycation products. To achieve faster glycation kinetics and high AGE accumulation, fructose was used instead of glucose. Hemoglobin A0 (HbA0) was fructosylated by an in vitro method. AGE formation was measured fluorimetrically by recording emission at 450 nm upon excitation at 350 nm. Thereafter, this fructosylated HbA0 was fractionated by column chromatography. Fractionation separated the proteinaceous substance from the AGEs. The presence of protein in the fractions was confirmed by measuring the intrinsic protein fluorescence and by the Bradford reaction. GNPs were synthesized using the templates of the chromatographically separated fractions of fructosylated HbA0. Each fraction gave rise to GNPs of varying colour, indicating the presence of a distinct set of glycation products differing structurally and chemically. In some vials, a clear solution appeared as the particles settled out. The reactive groups of the intermediates kept the GNP formation mechanism running and did not lead to stable particle formation until day 10, whereas the GNP surface plasmon resonance (SPR) showed a uniform colour for the fractions collected in the case of non-fructosylated HbA0. Our findings highlight the use of GNPs as a simple colorimetric sensing platform for identifying intermediates of the glycation reaction, which could be implicated in the prognosis of the health risks associated with T2DM and other conditions.

Keywords: advance glycation endproducts, glycation, gold nano particle, sensor

Procedia PDF Downloads 287
431 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed. This solution provides good accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unmapped, as there are still many gaps to be explored between ship survey tracks. Moreover, such measurements are very expensive and time-consuming. An alternative is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans, whose products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models. Some forms of seafloor relief (e.g., seamounts) locally increase the Earth's gravitational pull, leading to changes in the sea surface. Based on satellite altimetry data, the sea surface height and marine gravity anomalies can be estimated, and from these anomalies it is possible to infer the structure of the seabed. The main goal of the work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, densification of the models, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis. Visualization of the results was carried out with Geographic Information System tools. The result is an extension of the state of knowledge of the quality and usefulness of the data used for seabed and sea surface modelling, and of the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, indirectly inform us about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. In general, the greater the depth at a given location, the lower the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
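
As an illustration of the kind of raster processing described above, the sketch below reads a GEBCO-style bathymetric grid from NetCDF and derives a simple seafloor gradient map, the sort of intermediate product used when assessing seamounts and undulating seafloor. The file name and the variable names ('lat', 'lon', 'elevation') are assumptions based on common GEBCO conventions, not details taken from the study.

```python
import numpy as np
from netCDF4 import Dataset   # pip install netCDF4

# Hypothetical GEBCO-style grid covering the east coast of North America
with Dataset("gebco_east_coast.nc") as nc:      # file name is an assumption
    lat   = nc.variables["lat"][:]              # degrees north
    lon   = nc.variables["lon"][:]              # degrees east
    depth = nc.variables["elevation"][:]        # metres, negative below sea level

# Approximate grid spacing in metres (spherical Earth, assumed for the sketch)
R = 6371000.0
dlat_m = np.deg2rad(np.mean(np.diff(lat))) * R
dlon_m = np.deg2rad(np.mean(np.diff(lon))) * R * np.cos(np.deg2rad(lat.mean()))

# Seafloor gradient magnitude; steep gradients often coincide with seamount flanks
dz_dy, dz_dx = np.gradient(depth, dlat_m, dlon_m)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
print("max seafloor slope (deg):", float(np.nanmax(slope_deg)))
```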

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 59
430 Ternary Organic Blend for Semitransparent Solar Cells with Enhanced Short Circuit Current Density

Authors: Mohammed Makha, Jakob Heier, Frank Nüesch, Roland Hany

Abstract:

Organic solar cells (OSCs) have made rapid progress and currently achieve power conversion efficiencies (PCE) of over 10%. OSCs have several merits over other direct light-to-electricity generating cells and can be processed at low cost from solution on flexible substrates over large areas. Moreover, combining organic semiconductors with transparent and conductive electrodes allows for the fabrication of semitransparent OSCs (SM-OSCs). For SM-OSCs, the challenge is to achieve a high average visible transmission (AVT) while maintaining a high short-circuit current (Jsc). Typically, the Jsc of SM-OSCs is smaller than when an opaque metal top electrode is used, because the light not absorbed during the first transit through the active layer and the transparent electrode is forward-transmitted out of the device. Recently, OSCs using a ternary blend of organic materials have received attention; this strategy is pursued to extend light harvesting over the visible range. However, it is a general challenge to manipulate the performance of ternary OSCs in a predictable way, because many key factors affect charge generation and extraction in ternary solar cells. Consequently, the device performance is affected by the compatibility between the blend components and the resulting film morphology, the energy levels and bandgaps, the concentration of the guest material, and its location in the active layer. In this work, we report on a solvent-free lamination process for the fabrication of efficient and semitransparent ternary-blend OSCs. The ternary blend was composed of PC70BM and the electron donors PBDTTT-C and an NIR-absorbing cyanine dye (Cy7T). Using an opaque metal top electrode, a PCE of 6% was achieved for the optimized binary polymer:fullerene blend (AVT = 56%). However, the PCE dropped to ~2% when the active film thickness was decreased to 30 nm to increase the AVT to 75%. We therefore resorted to the ternary blend and measured, for non-transparent cells, a PCE of 5.5% when using an active polymer:dye:fullerene (0.7:0.3:1.5 wt:wt:wt) film of 95 nm thickness (AVT = 65% when omitting the top electrode). In a second step, the optimized ternary blend was used for the fabrication of SM-OSCs. We used a plastic/metal substrate with a light transmission of over 90% as a transparent electrode, applied via a lamination process. The interfacial layer between the active layer and the top electrode was optimized in order to improve charge collection and the contact with the laminated top electrode. We demonstrated a PCE of 3% with an AVT of 51%. The parameter space for ternary OSCs is large, and it is difficult to find the best concentration ratios by trial and error. A rational approach to device optimization is the construction of a ternary-blend phase diagram. We discuss our attempts to construct such a phase diagram for the PBDTTT-C:Cy7T:PC70BM system via a combination of Cy7T-selective solvents and atomic force microscopy. From the ternary diagram, suitable morphologies for efficient light-to-current conversion can be identified, and we compare experimental OSC data with these predictions.

Keywords: organic photovoltaics, ternary phase diagram, ternary organic solar cells, transparent solar cell, lamination

Procedia PDF Downloads 245
429 Ammonia Cracking: Catalysts and Process Configurations for Enhanced Performance

Authors: Frea Van Steenweghen, Lander Hollevoet, Johan A. Martens

Abstract:

Compared to other hydrogen (H₂) carriers, ammonia (NH₃) is one of the most promising, as it contains 17.6 wt% hydrogen and is easily liquefied at roughly 9–10 bar at ambient temperature. More importantly, NH₃ is a carbon-free hydrogen carrier with no CO₂ emission on final decomposition. Ammonia has a well-defined regulatory framework and a good track record regarding safety. Furthermore, industry already has an existing transport infrastructure of pipelines, tank trucks, and shipping technology, as ammonia has been manufactured and distributed around the world for over a century. While NH₃ synthesis and transportation solutions are at hand, the missing link in the hydrogen delivery scheme from ammonia is an energy-lean and efficient technology for cracking ammonia into H₂ and N₂. The most explored option for ammonia decomposition is thermo-catalytic cracking, which is the most energy-lean and robust approach compared to other technologies such as plasma- and electrolysis-based decomposition. The decomposition reaction is favoured only at high temperatures (>300°C) and low pressures (1 bar), as the thermocatalytic ammonia cracking process faces thermodynamic limitations. At 350°C, the thermodynamic equilibrium at 1 bar limits the conversion to 99%; gaining additional conversion up to, e.g., 99.9% necessitates heating to about 530°C. Actually reaching thermodynamic equilibrium is infeasible, since a sufficient driving force is needed, requiring even higher temperatures; limiting the conversion below the equilibrium composition is the more economical option. Thermocatalytic ammonia cracking is documented in the scientific literature. Among the investigated metal catalysts (Ru, Co, Ni, Fe, …), ruthenium is known to be the most active for ammonia decomposition, with an onset of cracking activity around 350°C. To establish >99% conversion, temperatures close to 600°C are required. Such high temperatures are likely to reduce not only the round-trip efficiency but also the catalyst lifetime, because of sintering of the supported metal phase. In this research, the first focus was on catalyst bed design that avoids diffusion limitation. Experiments in our packed-bed tubular reactor set-up showed that extragranular diffusion limitations occur at low NH₃ concentrations when reaching high conversion, a phenomenon often overlooked in experimental work. A second focus was thermocatalyst development for ammonia cracking that avoids the use of noble metals. To this aim, candidate metals and mixtures were deposited on a range of supports. Sintering resistance at high temperatures and the basicity of the support were found to be crucial catalyst properties, and the catalytic activity was promoted by adding alkali and alkaline earth metals. A third focus was the study of the optimum process configuration by process simulation. A trade-off between conversion and favourable operating conditions (i.e., low pressure and high temperature) may lead to different process configurations, each with its own pros and cons. For example, high-pressure cracking would eliminate the need for post-compression but is detrimental to the thermodynamic equilibrium, leading to an optimum in cracking pressure in terms of energy cost.
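
The pressure dependence mentioned above follows directly from the equilibrium expression for the cracking reaction. For the stoichiometry NH₃ ⇌ ½N₂ + 3/2H₂ with a feed of pure ammonia at equilibrium conversion x and total pressure P, the mole-fraction form of the equilibrium constant (a standard textbook relation, written here as an illustration rather than as the authors' own model) is:

```latex
% NH3 <=> 1/2 N2 + 3/2 H2, pure NH3 feed, conversion x, total pressure P
% Mole fractions: y_{NH3} = (1-x)/(1+x), y_{N2} = x/(2(1+x)), y_{H2} = 3x/(2(1+x))
K_p \;=\; \frac{y_{\mathrm{N_2}}^{1/2}\, y_{\mathrm{H_2}}^{3/2}}{y_{\mathrm{NH_3}}}
        \left(\frac{P}{P^{\circ}}\right)
     \;=\; \frac{3\sqrt{3}}{4}\,\frac{x^{2}}{(1-x)(1+x)}\,\frac{P}{P^{\circ}}
```

At a fixed temperature Kp is constant, so a higher total pressure must be offset by a lower equilibrium conversion x; this is the pressure penalty behind the trade-off discussed in the abstract.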

Keywords: ammonia cracking, catalyst research, kinetics, process simulation, thermodynamic equilibrium

Procedia PDF Downloads 45
428 The Effects of Adding Vibrotactile Feedback to Upper Limb Performance during Dual-Tasking and Response to Misleading Visual Feedback

Authors: Sigal Portnoy, Jason Friedman, Eitan Raveh

Abstract:

Introduction: Sensory substitution is possible due to the capacity of our brain to adapt to information transmitted by a synthetic receptor via an alternative sensory system. Practical sensory substitution systems are being developed in order to increase the functionality of individuals with sensory loss, e.g., amputees. For upper limb prosthesis users, the loss of tactile feedback compels them to allocate visual attention to their prosthesis. The effect of adding vibrotactile feedback (VTF) to the applied force has been studied; however, its effect on the allocation of visual attention during dual-tasking and on the response to misleading visual feedback has not. We hypothesized that VTF would improve performance and reduce visual attention during dual-task assignments in healthy individuals using a robotic hand, and improve performance in a standardized functional test despite the presence of misleading visual feedback. Methods: For the dual-task paradigm, twenty healthy subjects were instructed to toggle two keyboard arrow keys with the left hand to keep a moving virtual car on a road on a screen. During the game, instructions for various activities, e.g., mix the sugar in the glass with a spoon, appeared on the screen. The subject performed these tasks with a robotic hand attached to the right hand. The robotic hand was controlled by the activity of the flexors and extensors of the right wrist, recorded using surface EMG electrodes. Pressure sensors attached at the tips of the robotic hand induced VTF through vibrotactile actuators attached to the right arm of the subject. An eye-tracking system tracked the visual attention of the subject during the trials. The trials were repeated twice, with and without the VTF. Additionally, the subjects performed the modified Box and Blocks test, hidden from eyesight, in a motion laboratory. Misleading visual feedback was presented virtually on a screen so that, twice during the trial, the virtual block fell while the physical block was still held by the subject. Results: This is an ongoing study, whose current results are detailed below; we are continuing these trials with transradial myoelectric prosthesis users. In the healthy group, the VTF did not reduce visual attention or improve performance during dual-tasking for transfer-to-target tasks, e.g., placing the eraser on the shelf. An improvement was observed for other tasks. For example, the average ± standard deviation of the time to complete the sugar-mixing task was 13.7 ± 17.2 s and 19.3 ± 9.1 s with and without the VTF, respectively. Also, the number of gaze shifts from the screen to the hand during this task was 15.5 ± 23.7 and 20.0 ± 11.6 with and without the VTF, respectively. The response of the subjects to the misleading visual feedback did not differ between the two conditions, i.e., with and without VTF. Conclusions: Our interim results suggest that the performance of certain activities of daily living may be improved by VTF. The substitution of visual sensory input by tactile feedback might require a long training period so that brain plasticity can occur and allow adaptation to the new condition.

Keywords: prosthetics, rehabilitation, sensory substitution, upper limb amputation

Procedia PDF Downloads 317
427 Understanding Language Teachers’ Motivations towards Research Engagement: A Qualitative Case Study of Vietnamese Tertiary English Teachers

Authors: My T. Truong

Abstract:

Among the various professional development (PD) options available for English as a second language (ESL) teachers, especially those at the tertiary level, research engagement has recently been recommended as an innovative model with transformative force for both individual teachers' PD and wider school improvement. Teachers who conduct research themselves tend to develop critical and analytical thinking about their instructional practices and enhance their ability to make autonomous pedagogical judgments and decisions. With such capabilities, teacher researchers are more likely to contribute to the curriculum innovation of their schools and the improvement of the whole educational process. The extent to which ESL teachers engage in research, however, depends largely on their research motivation, which not only shapes teachers' choice of a PD activity to pursue but also affects the degree and duration of the effort they are willing to invest in pursuing it. To understand language teachers' research practices, and to inform educational authorities about ways to promote a research culture among their ESL teaching staff, it is therefore vital to investigate teachers' research motivation. Despite its importance, this individual-difference construct has not been paid due attention, especially in ESL contexts. To fill this gap, this study aims to explore Vietnamese tertiary ESL teachers' motivations towards research. Guided by self-determination theory and the process model of motivation, it investigates teachers' initial motivations for conducting research and the factors that sustained or degraded their motivation during the research engagement process. Adopting a qualitative case-study approach, the study collected longitudinal data via semi-structured interviews and guided diary entries from three ESL tertiary teachers who were conducting their own research projects. The respondents attended two semi-structured interviews (one at the beginning of their project and the other three months afterwards) and wrote six guided diary entries between the two interviews. The results confirm the significant role motivation plays in driving teachers to initiate and maintain their participation in research and challenge some common assumptions in the teacher motivation literature. For instance, the quality of past and current research experience unsurprisingly emerged as an important factor that both motivated and demotivated teachers in their research engagement process. Contrary to general suggestions in the motivation literature, however, external demands were found in this study to be a critical motivation-sustaining factor, while intrinsic research interest alone did not suffice to help a teacher fulfil his research endeavour. With such findings, the study is expected to widen the motivational perspective on understanding language teacher research practice, given the paucity of related studies. Practically, it is hoped to enable teacher educators, PD program designers, and educational policy makers in Vietnam and similar contexts to approach the question of whether and how to promote research activities among ESL teachers feasibly. For practicing and in-service teachers, the findings may elucidate the motivational conditions under which they can be research-engaged, and the motivational factors that might hinder or encourage them in doing so.

Keywords: teacher motivation, teacher professional development, teacher research engagement, English as a second language (ESL)

Procedia PDF Downloads 161
426 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine tools is becoming critical for machine-tool builders and end-users for economic, ecological, and legislative reasons. Many machine-tool builders are seeking solutions that reduce the energy consumption of machine tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters, i.e., the set of parameters that leads to the best trade-off between energy consumption, part quality, and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed, and material feed rate; these have been identified as the most influential parameters in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm is used. The algorithm combines three fitness functions, i.e., objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function is based on the Kienzle cutting force model. The second uses the material removal rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions: one uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data, and the other uses Lasso regression to determine the same relation. The goal, then, is to find out which fitness function best predicts the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes, i.e., to determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated on a Tornos DT13 Swiss-type lathe, using a mechanical part that includes various Swiss-type machining operations. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand, each considering a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured, and all collected data are assigned to the appropriate CNC program and thus to the corresponding set of machining process parameters. The evaluation consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption predicted by each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The MRR-based fitness function has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
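
To make the comparison of fitness functions concrete, the sketch below sets up an MRR-based proxy and a Lasso-based learned predictor of spindle power and scores each by correlation against measured power, mirroring the evaluation described above. The data are synthetic, and the MRR expression, regression features, and hyperparameters are illustrative assumptions rather than the project's actual models.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical experiment log: one row per CNC program
# columns: depth of cut (mm), feed rate (mm/rev), spindle speed (rpm), measured power (W)
rng = np.random.default_rng(42)
doc   = rng.uniform(0.1, 2.0, 40)
feed  = rng.uniform(0.02, 0.2, 40)
speed = rng.uniform(1000, 8000, 40)
power = 50 + 900 * doc * feed * speed / 8000 + rng.normal(0, 20, 40)  # synthetic measurements

# Fitness proxy 1: material removal rate (illustrative turning approximation)
mrr = doc * feed * speed

# Fitness proxy 2: Lasso regression learned from the (synthetic) experiments
X = np.column_stack([doc, feed, speed, doc * feed * speed])
model = Lasso(alpha=0.1).fit(X, power)
power_pred = model.predict(X)

def norm_corr(a, b):
    """Pearson correlation between min-max normalised predictions and measurements."""
    a = (a - a.min()) / (a.max() - a.min())
    b = (b - b.min()) / (b.max() - b.min())
    return float(np.corrcoef(a, b)[0, 1])

print("MRR fitness correlation:  ", norm_corr(mrr, power))
print("Lasso fitness correlation:", norm_corr(power_pred, power))
```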

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 130
425 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard used 1 arc-second topographic data and a qualitative method known as Hazus, in which various criteria are checked at a location to determine a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as those at the St. Jude site, and the data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows the identification of local sites that may previously have been classified as low susceptibility; this in turn provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc-second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc-second data can therefore underestimate the susceptibility of short, steep slopes, which is dangerous because Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, the slope differences are significant: the 1 arc-second data show a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data show a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes, which have so far been ignored in regional analyses, are necessary to differentiate between small and large landslides. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must transition smoothly into a more robust local analysis. It is expected that slopes within the region previously assessed with low susceptibility scores contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis, which has not been possible before: due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous, whereas high-resolution regional analysis allows a more precise determination of hazard sites.
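
The resolution effect reported above can be reproduced with a few lines of raster algebra: slope is computed from the DEM at its native grid spacing, so coarsening the grid smooths a short, steep Leda clay scarp into a gentle average gradient. The sketch below illustrates this with a synthetic DEM and a simplified slope-to-susceptibility bucketing; the terrain, the thresholds, and the 0-10 bucketing are assumptions that only stand in for the full Hazus criteria.

```python
import numpy as np

def slope_degrees(dem: np.ndarray, cell_size_m: float) -> np.ndarray:
    """Slope angle per cell from a DEM raster at the given grid spacing."""
    dz_dy, dz_dx = np.gradient(dem, cell_size_m)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def susceptibility_from_slope(slope_deg: np.ndarray) -> np.ndarray:
    """Toy stand-in for the Hazus slope criterion on a 0-10 scale (thresholds assumed)."""
    bins = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]     # degrees
    return np.digitize(slope_deg, bins)

# Synthetic terrain: a flat plateau cut by a short, steep 10 m-high river bank
x = np.arange(0, 400, 1.0)                      # 1 m cells
profile = 10.0 / (1.0 + np.exp((x - 200) / 3))  # steep scarp around x = 200 m
dem_1m = np.tile(profile, (60, 1))

# Coarsen to ~30 m cells (roughly 1 arc second) by block averaging
dem_30m = dem_1m[:, :390].reshape(2, 30, 13, 30).mean(axis=(1, 3))

print("max slope at 1 m:  %.1f deg" % slope_degrees(dem_1m, 1.0).max())
print("max slope at 30 m: %.1f deg" % slope_degrees(dem_30m, 30.0).max())
print("max susceptibility bucket at 1 m: ", susceptibility_from_slope(slope_degrees(dem_1m, 1.0)).max())
print("max susceptibility bucket at 30 m:", susceptibility_from_slope(slope_degrees(dem_30m, 30.0)).max())
```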

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 52
424 Metal-Semiconductor Transition in Ultra-Thin Titanium Oxynitride Films Deposited by ALD

Authors: Farzan Gity, Lida Ansari, Ian M. Povey, Roger E. Nagle, James C. Greer

Abstract:

Titanium nitride (TiN) films have been widely used in a variety of fields due to their unique electrical, chemical, physical, and mechanical properties, including low electrical resistivity, chemical stability, and high thermal conductivity. In microelectronic devices, thin continuous TiN films are commonly used as diffusion barriers and as a metal gate material. However, as the film thickness decreases below a few nanometers, the electrical properties of the film alter considerably. In this study, the physical and electrical characteristics of 1.5 nm to 22 nm films deposited by plasma-enhanced atomic layer deposition (PE-ALD), using tetrakis(dimethylamino)titanium(IV) (TDMAT) chemistry and an Ar/N2 plasma, on 80 nm SiO2 and capped in-situ by 2 nm Al2O3, are investigated. The ALD technique allows uniformly thick films to be grown at the monolayer level in a highly controlled manner. The chemistry incorporates a low level of oxygen into the TiN films, forming titanium oxynitride (TiON). The thickness of the films is characterized by transmission electron microscopy (TEM), which confirms the uniformity of the films, and the surface morphology is investigated by atomic force microscopy (AFM), indicating sub-nanometer surface roughness. Hall measurements are performed to determine parameters such as carrier mobility, type, and concentration, as well as resistivity. Films thicker than 5 nm exhibit metallic behavior; however, we have observed that the thin-film resistivity is modulated significantly by film thickness, such that there is a more than five-orders-of-magnitude increase in the sheet resistance at room temperature when comparing the 5 nm and 1.5 nm films. Scattering effects at interfaces and grain boundaries could play a role in the thickness-dependent resistivity, in addition to quantum confinement effects that could occur in ultra-thin films: based on our measurements, the carrier concentration decreases from 1.5E22 cm⁻³ to 5.5E17 cm⁻³, while the mobility increases from <0.1 cm²/V·s to ~4 cm²/V·s, for the 5 nm and 1.5 nm films, respectively. Also, measurements at different temperatures indicate that the resistivity is relatively constant for the 5 nm film, while for the 1.5 nm film a reduction of more than two orders of magnitude is observed over the range 220 K to 400 K. The activation energies of the 2.5 nm and 1.5 nm films are 30 meV and 125 meV, respectively, indicating that the ultra-thin TiON films exhibit semiconducting behaviour; we attribute this effect to a metal-semiconductor transition. By the same token, the contact is no longer Ohmic for the thinnest (1.5 nm) film; hence, a modified lift-off process was developed to selectively deposit thicker films, allowing us to perform electrical measurements with low contact resistance on the raised contact regions. Our atomic-scale simulations, based on molecular-dynamics-generated amorphous TiON structures with low oxygen content, confirm our experimental observations, indicating highly n-type thin films.
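
The activation energies quoted above are the kind of values obtained from an Arrhenius fit of thermally activated conduction, sigma(T) = sigma0 · exp(-Ea / (kB·T)). The sketch below shows such an extraction on synthetic data generated with the 125 meV value reported for the 1.5 nm film; the prefactor, the number of temperature points, and the conductivity values are placeholders, not the measured film data.

```python
import numpy as np

kB_eV = 8.617333262e-5                  # Boltzmann constant, eV/K

# Synthetic conductivity data following sigma = sigma0 * exp(-Ea / (kB * T)),
# used here only to illustrate the extraction procedure.
T = np.linspace(220.0, 400.0, 10)       # measurement temperatures, K
Ea_true = 0.125                         # eV (value reported for the 1.5 nm film)
sigma = 3.0e2 * np.exp(-Ea_true / (kB_eV * T))   # S/m, prefactor is arbitrary

# Linearised Arrhenius fit: ln(sigma) = ln(sigma0) - Ea * (1 / (kB * T))
slope, intercept = np.polyfit(1.0 / (kB_eV * T), np.log(sigma), 1)
print("extracted Ea = %.0f meV" % (-slope * 1000.0))   # ~125 meV
```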

Keywords: activation energy, ALD, metal-semiconductor transition, resistivity, titanium oxynitride, ultra-thin film

Procedia PDF Downloads 271