Search results for: playful work design
1629 Properties of the CsPbBr₃ Quantum Dots Treated by O₃ Plasma for Integration in the Perovskite Solar Cell
Authors: Sh. Sousani, Z. Shadrokh, M. Hofbauerová, J. Kollár, M. Jergel, P. Nádaždy, M. Omastová, E. Majková
Abstract:
Perovskite quantum dots (PQDs) have the potential to increase the performance of perovskite solar cells (PSCs). The integration of PQDs into PSCs can extend the absorption range and enhance photon harvesting and device efficiency. In addition, PQDs can stabilize the device structure by passivating surface defects and traps in the perovskite layer and enhance its stability. The integration of PQDs into PSCs is strongly affected by the type of ligands on the surface of the PQDs. The ligands affect the charge transport properties of PQDs, as well as the formation of well-defined interfaces and the stability of PSCs. In this work, CsPbBr₃ QDs were synthesized by the conventional hot-injection method using cesium oleate, PbBr₂, and two different ligands, namely oleic acid (OA)@oleylamine (OAm) and didodecyldimethylammonium bromide (DDAB). STEM confirmed the regular shape and relatively monodisperse cubic structure of the prepared CsPbBr₃ QDs, with an average size of about 10-14 nm. Further, the photoluminescent (PL) properties of the PQDs/perovskite bilayer with the ligands OA@OAm and DDAB were studied. For this purpose, ITO/PQDs, as well as ITO/PQDs/MAPI perovskite structures, were prepared by spin coating, and the effect of the ligand and oxygen plasma treatment was analysed. The plasma treatment of the PQDs layer could be beneficial for the deposition of the MAPI perovskite layer and the formation of a well-defined PQDs/MAPI interface. The absorption edge in the UV-Vis absorption spectra for OA@OAm CsPbBr₃ QDs is located around 513 nm (band gap 2.38 eV); for DDAB CsPbBr₃ QDs, it is located at 490 nm (band gap 2.33 eV). The photoluminescence (PL) spectra of the CsPbBr₃ QDs show two peaks located around 514 nm (503 nm) and 718 nm (708 nm) for OA@OAm (DDAB). The peak around 500 nm corresponds to the PL of the PQDs, and the peak close to 710 nm belongs to the surface states of the PQDs for both types of ligands. These surface states are strongly affected by the O₃ plasma treatment. For PQDs with the DDAB ligand, O₃ exposure (5, 10, 15 s) results in a blue shift of the PQDs peak and a non-monotonous change of the amplitude of the surface states' peak. For the OA@OAm ligand, the O₃ exposure did not cause any shift of the PQDs peak, and the intensity of the PL peak related to the surface states is lower by one order of magnitude in comparison with DDAB, while still being affected by the O₃ plasma treatment. The PL results indicate the possibility of tuning the position of the PL maximum by the ligand of the PQDs. Similar behaviour of the PQDs layer was observed for the ITO/QDs/MAPI samples, where an additional strong PL peak at 770 nm coming from the perovskite layer was observed; for the sample with PQDs with DDAB ligands, a small blue shift of the perovskite PL maximum was observed independently of the plasma treatment. These results suggest the possibility of affecting the PL maximum position and the surface states of the PQDs by the combination of a suitable ligand and the O₃ plasma treatment.
Keywords: perovskite quantum dots, photoluminescence, O₃ plasma, perovskite solar cells
Procedia PDF Downloads 70
1628 Corrosion Analysis of Brazed Copper-Based Conducts in Particle Accelerator Water Cooling Circuits
Authors: A. T. Perez Fontenla, S. Sgobba, A. Bartkowska, Y. Askar, M. Dalemir Celuch, A. Newborough, M. Karppinen, H. Haalien, S. Deleval, S. Larcher, C. Charvet, L. Bruno, R. Trant
Abstract:
The present study investigates the corrosion behavior of copper (Cu)-based conducts predominantly brazed with Sil-Fos (a self-fluxing copper-based filler with silver and phosphorus) within various cooling circuits of demineralized water across different particle accelerator components at CERN. The study covers a range of sample service times, from a few months to fifty years, and includes various accelerator components such as quadrupoles, dipoles, and bending magnets. The investigation comprises the established sample extraction procedure, the examination methodology including non-destructive testing, the evaluation of the corrosion phenomena, and the identification of commonalities across the studied components, as well as an analysis of the environmental influence. The systematic analysis included computed microtomography (CT) of the joints, which revealed distributed defects across all brazing interfaces. Some defects appeared to result from areas not wetted by the filler during the brazing operation, displaying round shapes, while others exhibited irregular contours and radial alignment, indicative of a network or interconnection. The subsequent dry cutting facilitated access to the conducts' inner surfaces and the brazed joints for further inspection through light microscopy and scanning electron microscopy (SEM) and chemical analysis via energy-dispersive X-ray spectroscopy (EDS). Brazing analysis away from affected areas identified the expected phases for a Sil-Fos alloy. In contrast, the affected locations displayed micrometric cavities propagating into the material, along with selective corrosion of the bulk Cu initiated at the conductor-braze interface. Corrosion product analysis highlighted the consistent presence of sulfur (up to 6% by weight), whose origin and role in the corrosion initiation and extension are being further investigated. This study is of paramount importance as it plays a crucial role in comprehending the underlying factors contributing to recently identified water leaks and evaluating the extent of the issue. Its primary objective is to provide essential insights for the repair of impacted brazed joints when accessibility permits. Moreover, the study seeks to contribute to the improvement of design and manufacturing practices for future components, ultimately enhancing the overall reliability and performance of magnet systems within CERN accelerator facilities.
Keywords: accelerator facilities, brazed copper conducts, demineralized water, magnets
Procedia PDF Downloads 46
1627 Liquid Food Sterilization Using Pulsed Electric Field
Authors: Tanmaya Pradhan, K. Midhun, M. Joy Thomas
Abstract:
Increasing the shelf life and improving the quality are important objectives for the success of the packaged liquid food industry. One of the methods by which this can be achieved is by deactivating the micro-organisms present in the liquid food through pasteurization. Pasteurization is done by heating, but heat treatment has serious disadvantages such as the reduction in food quality, flavour, taste, and colour, which has led to the development of alternative methods such as treatment using UV radiation, high pressure, nuclear irradiation, and pulsed electric fields. In recent years, the use of the pulsed electric field (PEF) for inactivation of the microbial content in food has been gaining popularity. PEF applies a very high electric field for a short time to inactivate microorganisms, which requires a high-voltage pulsed power source. Pulsed power sources used for PEF treatments are usually in the range of 5 kV to 50 kV. Different pulse shapes are used, such as exponentially decaying and square wave pulses. Exponentially decaying pulses are generated by high-power switches with only turn-on capability and, therefore, discharge the total energy stored in the capacitor bank. These pulses have a sudden onset and, therefore, a high rate of rise, but a very slow decay, which yields extra heat that is ineffective in microbial inactivation. Square pulses can be produced by an incomplete discharge of a capacitor with the help of a switch having both on/off control or by using a pulse forming network. In this work, a pulsed power-based system is designed with the help of high-voltage capacitors and solid-state switches (IGBTs) for the inactivation of pathogenic micro-organisms in liquid foods such as fruit juices. The high-voltage generator is based on the Marx generator topology, which can produce variable amplitude, frequency, and pulse width according to the requirements. The liquid food is treated in a chamber where a pulsed electric field is produced between stainless steel electrodes using the pulsed output voltage of the supply. Preliminary bacterial inactivation tests were performed on orange juice inoculated with Escherichia coli bacteria. With the help of the developed pulsed power source and the chamber, the inoculated orange juice was PEF-treated. The voltage was varied to obtain a peak electric field of up to 15 kV/cm. For a total treatment time of 200 µs, a 30% reduction in the bacterial count was observed. The detailed results and analysis will be presented in the final paper.
Keywords: Escherichia coli bacteria, high voltage generator, microbial inactivation, pulsed electric field, pulsed forming line, solid-state switch
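As a rough illustration of the pulse parameters quoted above (the 15 kV/cm peak field and the 200 µs total treatment time), the minimal Python sketch below relates pulse voltage, electrode gap, pulse width and pulse count. The electrode gap and pulse count are assumed values for illustration only, not the authors' actual chamber parameters.

```python
# Illustrative back-of-the-envelope calculation for a PEF treatment chamber.
# Only the 15 kV/cm target field and 200 us total treatment time come from
# the abstract; the electrode gap and pulse count are assumed values.

def peak_field_kv_per_cm(voltage_kv: float, gap_cm: float) -> float:
    """Peak electric field between parallel-plate electrodes (uniform-field approximation)."""
    return voltage_kv / gap_cm

def total_treatment_time_us(pulse_width_us: float, n_pulses: int) -> float:
    """Cumulative exposure time seen by the liquid food."""
    return pulse_width_us * n_pulses

if __name__ == "__main__":
    gap_cm = 1.0                                        # assumed electrode spacing
    voltage_kv = 15.0                                   # amplitude needed for 15 kV/cm at this gap
    print(peak_field_kv_per_cm(voltage_kv, gap_cm))     # 15.0 kV/cm
    # e.g. 100 pulses of 2 us each give the 200 us total treatment time
    print(total_treatment_time_us(2.0, 100))            # 200.0 us
```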
Procedia PDF Downloads 184
1626 Food Security in Germany: Inclusion of the Private Sector through Law Reform Faces Challenges
Authors: Agnetha Schuchardt, Jennifer Hartmann, Laura Schulte, Roman Peperhove, Lars Gerhold
Abstract:
If critical infrastructures fail, even for a short period of time, it can have significant negative consequences for the affected population. This is especially true for the food sector, which is strongly interlinked with other sectors like the power supply. A blackout could leave several cities without food supply for numerous days, simply because cash register systems no longer work properly. According to public opinion, securing the food supply in emergencies is considered a task of the state; however, in the German context, the key players are private enterprises and private households. Neither is aware of its responsibility, and neither can be forced to take any preventive measures prior to an emergency. This problem became evident to officials and politicians, so the law covering food security was revised in order to include private stakeholders in mitigation processes. The paper will present a scientific review of governmental and regulatory literature. The focus is the inclusion of the food industry through a law reform and the challenges that still exist. Together with legal experts, an analysis of regulations will be presented that explains the development of the law reform concerning food security and emergency storage in Germany. The main findings are that the existing public food emergency storage is outdated, insufficient and too expensive. The state is required to protect food as a critical infrastructure but does not have the capacities to live up to this role. Through a law reform in 2017, new structures should be established. The innovation was to include the private sector in the civil defense concept, since it has the required knowledge and experience. But the food industry is still reluctant. Preventive measures do not serve economic purposes – on the contrary, they cost money. The paper will discuss respective examples, such as equipping supermarkets with emergency power supplies or self-sufficient cash register systems, and why neither the state nor the economy is willing to cover the costs of these measures. The biggest problem with the new law is that private enterprises can only be forced to support food security once the state of emergency has already occurred, and not one minute earlier. The paper will cover two main results: the literature review and an expert workshop that will be conducted in summer 2018 with stakeholders from different parts of the food supply chain as well as officials of the public food emergency concept. The results from this participative process will be presented, and recommendations will be offered that show how the private economy could be better included in a modern food emergency concept (e.g., tax reductions for stockpiling).
Keywords: critical infrastructure, disaster control, emergency food storage, food security, private economy, resilience
Procedia PDF Downloads 186
1625 Detection of Alzheimer's Protein on Nano Designed Polymer Surfaces in Water and Artificial Saliva
Authors: Sevde Altuntas, Fatih Buyukserin
Abstract:
Alzheimer’s disease is responsible for irreversible neural damage in parts of the brain. One of the disease markers is the Amyloid-β 1-42 protein, which accumulates in the brain in the form of plaques. The basic problem for detection of the protein is the low amount of protein, which cannot be detected properly in body fluids such as blood, saliva or urine. To solve this problem, tests like ELISA or PCR have been proposed, which are expensive, require specialized personnel and can involve complex protocols. Therefore, surface-enhanced Raman spectroscopy (SERS) is a good candidate for detection of the Amyloid-β 1-42 protein, because the spectroscopic technique can potentially allow even single-molecule detection from liquid and solid surfaces. Besides, the SERS signal can be improved by using nanopatterned surfaces and is also specific to molecules. In this context, our study proposes to fabricate diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of the Amyloid-β 1-42 protein in water and artificial saliva media by the enhancement of the protein SERS signal. The nanopatterned PC surface that was used to enhance the SERS signal was fabricated by using anodic alumina membranes (AAMs) as a template. It is possible to produce AAMs with different column structures and varying thicknesses depending on voltage and anodization time. After the fabrication process, the pore diameter of the AAMs can be adjusted by dilute acid solution treatment. In this study, two different column structures were prepared. After a surface modification to decrease their surface energy, the AAMs were treated with PC solution. Following the solvent evaporation, nanopatterned PC films with tunable pillared structures were peeled off from the membrane surface. The PC film was then modified with Au and Thioflavin-T for the detection of the Amyloid-β 1-42 protein. The protein detection studies were conducted first in water via this biosensor platform. The same measurements were conducted in artificial saliva to detect the presence of the Amyloid-β 1-42 protein. SEM, SERS and contact angle measurements were carried out for the characterization of the different surfaces and further demonstration of the protein attachment. SERS enhancement factor calculations were also completed from the experimental results. As a result, our research group fabricated diagnostic test models that utilize Au-coated nanopatterned polycarbonate (PC) surfaces modified with Thioflavin-T to detect low concentrations of Alzheimer’s Amyloid-β protein in water and artificial saliva media. This work was supported by The Scientific and Technological Research Council of Turkey (TUBITAK) Grant No: 214Z167.
Keywords: Alzheimer, anodic aluminum oxide, nanotopography, surface enhanced Raman spectroscopy
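For context, the enhancement factor calculation mentioned above is usually based on the standard analytical definition from the SERS literature shown below; this is the common textbook form, not necessarily the exact expression used by the authors:

\[
\mathrm{EF} \;=\; \frac{I_{\mathrm{SERS}}/N_{\mathrm{SERS}}}{I_{\mathrm{ref}}/N_{\mathrm{ref}}}
\]

where I_SERS and I_ref are the Raman intensities measured on the Au-coated nanopatterned substrate and on a non-enhancing reference, and N_SERS and N_ref are the corresponding numbers of probed molecules.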
Procedia PDF Downloads 291
1624 Assessment of Financial Performance: An Empirical Study of Crude Oil and Natural Gas Companies in India
Authors: Palash Bandyopadhyay
Abstract:
Background and significance of the study: Crude oil and natural gas are of crucial importance due to their increasing demand in India. The demand has increased because of changes in lifestyle over time. Since India makes poor utilization of its oil production capacity, imports have increased progressively. This ultimately hits the foreign exchange reserves of India and negatively affects the Indian economy as well. The financial performance of crude oil and natural gas companies in India has declined year after year because of underutilization of production capacity, growing demand, changes in lifestyle, and a rising import bill with outflows of foreign currency. Against this background, the current study seeks to measure the financial performance of crude oil and natural gas companies of India in the post-liberalization period. In view of this, the study assesses financial performance in terms of liquidity management, solvency, efficiency, financial stability, and profitability of the companies under study. Methodology: This research work is based on yearly ratio data collected from the Centre for Monitoring Indian Economy (CMIE) Prowess database for the period between 1993-94 and 2012-13, with 20 observations, using liquidity, solvency and efficiency indicators, profitability indicators and financial stability indicators of all the major crude oil and natural gas companies in India. In the course of the analysis, descriptive statistics, correlation statistics, and linear regression tests have been utilized. Major findings: Descriptive statistics indicate that the liquidity position is satisfactory in the case of three of the selected crude oil and natural gas companies (Oil and Natural Gas Corporation Videsh Limited, Oil India Limited and Selan Exploration and Transportation Limited), but the solvency position is satisfactory only for one company (Oil and Natural Gas Corporation Videsh Limited). However, the efficiency analysis points out that Oil and Natural Gas Corporation Videsh Limited effectively manages its inventory, receivables, and payables, but its overall liquidity management is not good. The profitability position is very satisfactory in the case of all the companies except Tata Petrodyne Limited, but profitability management is not satisfactory for all the companies under study. Financial stability analysis shows that all the companies are more dependent on debt capital, which carries financial risk. Correlation and regression test results illustrate that profitability is positively and negatively associated with liquidity, solvency, efficiency, and financial stability indicators. Concluding statement: The management of liquidity and profitability of crude oil and natural gas companies in India should be improved by controlling unnecessary imports, in spite of the heavy demand for crude oil and natural gas in India, and by proper utilization of domestic oil reserves. At the same time, the Indian government has to be concerned about rupee depreciation and interest rates.
Keywords: financial performance, crude oil and natural gas companies, India, linear regression
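A minimal sketch of the correlation and regression step described above is shown below, assuming the yearly ratios have been exported to a CSV file; the file name and column names (e.g., roce as the profitability indicator) are hypothetical placeholders, not CMIE Prowess variable names.

```python
# Minimal sketch of the correlation and linear regression analysis on yearly
# financial ratios. File and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.api as sm

ratios = pd.read_csv("company_ratios_1994_2013.csv")  # assumed yearly ratio data

# Correlation of profitability with liquidity, solvency and efficiency indicators
print(ratios[["roce", "current_ratio", "debt_equity", "asset_turnover"]].corr())

# Linear regression: profitability on liquidity, solvency and efficiency indicators
X = sm.add_constant(ratios[["current_ratio", "debt_equity", "asset_turnover"]])
model = sm.OLS(ratios["roce"], X).fit()
print(model.summary())
```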
Procedia PDF Downloads 322
1623 Bi-objective Network Optimization in Disaster Relief Logistics
Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann
Abstract:
Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria like response time, meeting demand and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and the magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need to provide robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. Also, we conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory based on minimizing costs and maximizing demand satisfaction. The strategy has the potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network and provides recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future works, such as additional network constraints and heuristic algorithms.
Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks
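As a purely illustrative sketch of the type of model described, a weighted-sum scalarization of a scenario-based, capacitated covering-location formulation could be written as follows; all symbols and constraints are assumptions for exposition rather than the paper's exact formulation:

\[
\begin{aligned}
\min_{x,\,y,\,z}\quad & \lambda \Big(\sum_{j} f_j x_j + \sum_{s} p_s \sum_{i}\sum_{j} c_{ij}\, d_{is}\, y_{ijs}\Big) \;-\; (1-\lambda)\sum_{s} p_s \sum_{i} d_{is}\, z_{is} \\
\text{s.t.}\quad & \sum_{j} y_{ijs} \;\ge\; z_{is} \qquad \forall i, s \\
& \sum_{i} d_{is}\, y_{ijs} \;\le\; Q_j\, x_j \qquad \forall j, s \\
& x_j,\, z_{is} \in \{0,1\}, \qquad y_{ijs} \in [0,1],
\end{aligned}
\]

where x_j opens depot j (fixed cost f_j, capacity Q_j), y_ijs is the fraction of demand d_is of node i served from depot j in disruption scenario s (probability p_s, unit transport cost c_ij), z_is indicates whether node i is fully covered, and the weight λ in [0,1] trades off logistics cost against demand coverage.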
Procedia PDF Downloads 79
1622 Accessibility of Social Justice through Social Security in Indian Organisations: Analysis Based on Workforce
Authors: Neelima Rashmi Lakra
Abstract:
India was among the highly developed economies up to 1850 due to its cottage industries. Towards the end of the 18th century, modern industrial enterprises began with the first cotton mill in Bombay, the jute mill near Calcutta and the coal mine in Raniganj. This was counted as the real beginning of industry in India, in 1854. Prior to this period, people concentrated only on agriculture, menial service or handicrafts, and the introduction of industries exposed them to the discipline of the factory, which was very tedious for them. With an increasing number of factories being set up, in addition to mining and the introduction of the railway, the World War period (1914-19), the Second World War period (1939-45) and the Great Depression (1929-33), there was a visible change in the nature of work for the people, which resulted in outbursts of strikes for various reasons in these factories. With India’s independence came the emergence of public sector industries, and labour legislation was introduced. Meanwhile, trade unions came to the rescue of the oppressed but failed to last long. Soon after, with the New Economic Policy, organisations came to face challenges in performing their best, and social justice for the workmen was in question. Against this backdrop, studies were found discussing the central human capabilities which could be addressed through social security schemes. Therefore, this study was taken up to look at the reforms and legislation mainly meant for the welfare of labour. This paper will be relevant to the large section of the Indian population that has been serving in the public sector since the introduction of industries, and it addresses the issue of social justice through social security measures for this huge workforce serving the nation. The objectives of the study include: to find out what labour legislation already exists in India; to examine the role of the trade union movement; to look at the effects of the New Economic Policy on these reforms and on the measures taken for the workforce employed in the public sector; and, finally, to assess whether these measures fulfil the social justice aspects for the larger society as a whole. The methodology followed the collection of data from books, journal articles, reports, company reports and manuals, focusing mainly on Indian studies, and the data were analysed following the content analysis method. The findings showed the measures taken for social security, but also that very few particular additions or amendments have been made to these Acts and provisions with the onset of the new liberalisation policy. Therefore, the study concludes by examining the social justice aspects in the context of a developing economy and discussing the recommendations.
Keywords: public sectors, social justice, social security schemes, trade union movement
Procedia PDF Downloads 450
1621 Intersection of Racial and Gender Microaggressions: Social Support as a Coping Strategy among Indigenous LGBTQ People in Taiwan
Authors: Ciwang Teyra, A. H. Y. Lai
Abstract:
Introduction: Indigenous LGBTQ individuals face significant life stress, such as racial and gender discrimination and microaggressions, which may lead to negative impacts on their mental health. Although studies relevant to Taiwanese indigenous LGBTQ people are gradually increasing, most of them are primarily conceptual or qualitative in nature. This research aims to fill the gap by offering empirical quantitative evidence, especially investigating the impact of racial and gender microaggressions on mental health among Taiwanese indigenous LGBTQ individuals from an intersectional perspective, as well as examining whether social support can help them cope with microaggressions. Methods: Participants were 200 indigenous LGBTQ people (mean age = 29.51; female = 31%, male = 61%, others = 8%). A cross-sectional quantitative design was implemented using data collected in the year 2020. Standardised measurements were used, including the Racial Microaggression Scale (10 items), the Gender Microaggression Scale (9 items), the Social Support Questionnaire-SF (6 items), the Patient Health Questionnaire (9 items), and the Generalised Anxiety Disorder scale (7 items). Covariates were age, gender, and perceived economic hardship. Structural equation modelling (SEM) was employed using Mplus 8.0 with the latent variables of depression and anxiety as outcomes. A main effect SEM model was first established (Model 1). To test the moderation effects of perceived social support, an interaction effect model (Model 2) was created with interaction terms entered into Model 1. Numerical integration was used with maximum likelihood estimation to estimate the interaction model. Results: Model fit statistics of Model 1 were χ²(df) = 1308.1 (795), p < .05; CFI/TLI = 0.92/0.91; RMSEA = 0.06; SRMR = 0.06. The AIC and BIC values of Model 2 changed only slightly compared to Model 1 (AIC = 15631 (Model 1) vs. 15629 (Model 2); BIC = 16098 (Model 1) vs. 16103 (Model 2)). Model 2 was adopted as the final model. In the main effect Model 1, racial microaggression and perceived social support were associated with depression and anxiety, but sexual orientation microaggression was not (indigenous microaggression: b = 0.27 for depression, b = 0.38 for anxiety; social support: b = -0.37 for depression, b = -0.34 for anxiety). Thus, an interaction term between social support and indigenous microaggression was added in Model 2. In the final Model 2, indigenous microaggression and perceived social support continue to be statistically significant predictors of both depression and anxiety. Social support moderated the effect of indigenous microaggression on depression (b = -0.22), but not on anxiety. None of the covariates was statistically significant. Implications: Results indicated that racial microaggressions have a significant impact on indigenous LGBTQ people’s mental health. Social support plays a crucial role in buffering the negative impact of racial microaggression. To promote indigenous LGBTQ people’s wellbeing, it is important to consider how to support them in developing social support network systems.
Keywords: microaggressions, intersectionality, indigenous population, mental health, social support
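For readers less familiar with moderation tests, the sketch below shows a simplified observed-variable version of the interaction analysis described above, using ordinary least squares in Python. It is not the latent-variable SEM estimated in Mplus, and the data file and column names are hypothetical placeholders.

```python
# Simplified observed-variable illustration of the moderation test
# (racial microaggressions x perceived social support on depression).
# This is NOT the latent-variable SEM estimated in Mplus; scale scores are
# treated as observed sums and all names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("indigenous_lgbtq_survey.csv")  # assumed data file

# Mean-center predictors before forming the interaction term
for col in ["racial_microaggression", "social_support"]:
    df[col + "_c"] = df[col] - df[col].mean()
df["interaction"] = df["racial_microaggression_c"] * df["social_support_c"]

model = smf.ols(
    "depression ~ racial_microaggression_c + social_support_c + interaction"
    " + age + gender + economic_hardship",
    data=df,
).fit()
print(model.summary())  # a negative interaction coefficient indicates buffering
```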
Procedia PDF Downloads 146
1620 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to the exploration of the two 'degrees of freedom' and how they impact the model’s properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people’s opinions can be measured. Even for abstract models (i.e., not intended for the fitting of real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change when using different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most of the liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet. Indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model’s dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation’s output (i.e., all results are possible). Still, by using perfect scales, we show that scale transformations can be used to perfectly transform one model into another. We test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
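A minimal sketch of the kind of experiment described above is given below: a standard bounded-confidence (Deffuant-style) simulation run once on the raw opinion scale and once after a monotone, non-linear rescaling of the same opinions. The specific update rule, the square-root rescaling and the cluster-counting heuristic are assumptions for illustration, not the exact models compared in the paper.

```python
# Illustration of how a monotone rescaling of the opinion axis can alter the
# outcome of a bounded-confidence (Deffuant-style) model. The update rule and
# the square-root rescaling are illustrative assumptions.
import numpy as np

def deffuant(opinions, eps=0.2, mu=0.5, steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    x = opinions.copy()
    for _ in range(steps):
        i, j = rng.integers(len(x), size=2)
        if abs(x[i] - x[j]) < eps:           # interact only within the confidence bound
            shift = mu * (x[j] - x[i])
            x[i] += shift
            x[j] -= shift
    return x

def n_clusters(x, tol=0.05):
    """Rough cluster count: number of distinct opinion bins of width tol."""
    return len(np.unique(np.round(np.sort(x) / tol)))

rng = np.random.default_rng(42)
raw = rng.uniform(0, 1, 200)                 # opinions measured on one scale

final_raw = deffuant(raw)                    # dynamics on the original scale
final_rescaled = deffuant(np.sqrt(raw))      # same population, non-linearly rescaled scale

# Compare cluster structure under the two measurement scales
print(n_clusters(final_raw), n_clusters(final_rescaled))
```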
Procedia PDF Downloads 119
1619 Bad Juju: The Translation of the African Zombi to Nigerian and Western Screens
Authors: Randall Gray Underwood
Abstract:
Within the past few decades, zombie cinema has evolved from a niche outgrowth of the horror genre into one of the most widely-discussed and thoroughly-analyzed subgenres of film. Rising to international popularity during the 1970s and 1980s following the release of George Romero’s landmark classic, Night of the Living Dead (1968), and its much-imitated sequel, Dawn of the Dead (1978), the zombie genre returned to global screens in full force at the turn of the century following earth-shattering events such as the 9/11 terrorist attacks, America’s subsequent war in the Middle East, environmental pandemics, and the emergence of a divided and disconnected global populace in the age of social media. Indeed, the presence of the zombie in all manner of art and entertainment—movies, literature, television, video games, comic books, and more—has become nothing short of pervasive, engendering a plethora of scholarly writings, books, opinion pieces, and video essays from all manner of academics, cultural commentators, critics, and casual fans, with each espousing their own theories regarding the zombie’s allegorical and symbolic value within global fiction. Consequently, the walking dead of recent years have been variously positioned as fictive manifestations of human fears of societal collapse, environmental contagion, sexually-transmitted disease, primal regression, dwindling population rates, global terrorism, and the foreign “Other”. Less commonly analyzed within film scholarship, however, is the connection between the zombie’s folkloric roots and native African/Haitian spiritual practice; specifically, how this connection impacts the zombie’s presentation in African films by native storytellers versus in similar narratives told from a western perspective. This work will examine the unlikely connections and contrasts inherent in the portrayal of the traditional African/Haitian zombie (or zombi, in Haitian French) in the Nollywood film Witchdoctor of the Livingdead (1985, Charles Abi Enonchong) versus its depiction in the early Hollywood films White Zombie (1932, Victor Halperin) and I Walked with a Zombie (1943, Jacques Tourneur), through analysis of each cinema's use of the zombie as a visual metaphor for subjugation/slavery, as well as differences in their representation of the spiritual folklore from which the figure of the zombie originates. Select films from the post-Night of the Living Dead zombie cinema landscape will also warrant brief discussion in relation to Witchdoctor of the Livingdead.
Keywords: Nollywood, zombie cinema, horror cinema, Classical Hollywood
Procedia PDF Downloads 60
1618 The Impact of Sensory Overload on Students on the Autism Spectrum in Italian Inclusive Classrooms: Teachers' Perspectives and Training Needs
Authors: Paola Molteni, Luigi d’Alonzo
Abstract:
Background: Sensory issues are now considered one of the key aspects in defining and diagnosing autism, changing the perspectives on behavioural analysis and intervention in mainstream educational services. However, Italian teachers’ training is not yet specific on the topic of autism and its sensory-related effects, and this research investigates teachers’ capability to understand students’ needs and challenging behaviours in relation to sensory perceptions. Objectives: The research aims to analyse mainstream school teachers’ awareness of students’ sensory perceptions and how this affects classroom inclusion and the learning process. The research questions are: i) Are teachers able to identify students’ sensory issues?; ii) Are trained teachers more able to identify sensory problems than untrained ones?; iii) What is the impact of sensory issues on inclusion in mainstream classrooms?; iv) What should teachers know about autistic sensory dimensions? Methods: This research was designed as a pilot study that involves a multi-methods approach, including action and collaborative research methodology. The research design allows the researcher to capture the complexity of a provincial school district (from kindergarten to high school) through a deep and detailed analysis of selected aspects. The researcher explored the questions described above through 133 questionnaires and 6 focus groups. The qualitative and quantitative data collected during the research were analysed using Interpretative Phenomenological Analysis (IPA). Results: Mainstream school teachers are not able to confidently recognise sensory issues of children included in the classroom. The research underlines: how professionals with no specific training on autism are not able to recognise sensory problems in students on the spectrum; how hearing and sight issues have a higher impact on classroom inclusion and the student’s learning process; and how a lack of understanding is often followed by misinterpretations of the impact of sensory issues and challenging behaviours. Conclusions: As this research has shown, promoting and enhancing the understanding of sensory issues related to autism is fundamental to enable mainstream school teachers to define educational and life-long plans that properly answer students’ needs and support their real inclusion in the classroom. This study is a good example of how educational research can meet and help daily practice in working with people on the autism spectrum and support the training design for mainstream school teachers: the emerging need for dedicated preparation on sensory issues must be considered when planning school district in-service training programmes, specifically tailored for inclusive services.
Keywords: autism spectrum condition, scholastic inclusion, sensory overload, teacher's training
Procedia PDF Downloads 317
1617 Ex-vivo Bio-distribution Studies of a Potential Lung Perfusion Agent
Authors: Shabnam Sarwar, Franck Lacoeuille, Nadia Withofs, Roland Hustinx
Abstract:
After the development of a potential surrogate of MAA and its successful application for the diagnosis of pulmonary embolism in artificially embolized rats’ lungs, this microparticulate system was radiolabelled with gallium-68 to synthesize 68Ga-SBMP with high radiochemical purity (>99%). As a prerequisite step for clinical trials, the 68Ga-labelled starch-based microparticles (SBMP) were analysed for their in-vivo behavior in small animals. The purpose of the presented work includes the ex-vivo biodistribution studies of 68Ga-SBMP in order to assess the activity uptake in target organs with respect to time, the excretion pathways of the radiopharmaceutical, the %ID/g in major organs, T/NT ratios, and the in-vivo stability of the radiotracer and, subsequently, of the microparticles in the target organs. Radiolabelling of the starch-based microparticles was performed by incubating them with 68Ga generator eluate (430±26 MBq) at room temperature and pressure without using any harsh reaction conditions. For the ex-vivo biodistribution studies, healthy white Wistar rats weighing between 345 and 460 g were injected intravenously with 68Ga-SBMP (20±8 MBq), containing about 200,000-600,000 SBMP particles in a volume of 700 µL. The rats were euthanized at predefined time intervals (5 min, 30 min, 60 min and 120 min), and their organ parts were cut, washed, put in pre-weighed tubes and measured for radioactivity counts with an automatic gamma counter. The 68Ga-SBMP showed >99% RCP just after 10-20 min incubation through a simple and robust procedure. The biodistribution of 68Ga-SBMP showed that just 5 min post injection, the major uptake was observed in the lungs, followed by blood, heart, liver, kidneys, bladder, urine, spleen, stomach, small intestine, colon, skin and skeleton, and thymus, while the smallest activity was found in the brain. Radioactivity counts stayed stable in the lungs with a gradual decrease over time, and at 2 h post injection almost half of the activity was still seen in the lungs. This is a sufficient time to perform PET/CT lung scanning in humans, while activity in the liver, spleen, gut and urinary system decreased with time. The results showed that the urinary system is the excretion pathway, rather than hepatobiliary excretion. The high T/NT ratios suggest good-contrast images for PET/CT lung perfusion studies; henceforth, further pre-clinical studies and then clinical trials should be planned in order to utilize this potential lung perfusion agent.
Keywords: starch based microparticles, gallium-68, biodistribution, target organs, excretion pathways
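A minimal sketch of the decay-corrected %ID/g computation typically used for such ex-vivo gamma-counter data is given below; the ~68 min half-life of gallium-68 is a physical constant, while all numerical inputs in the example are placeholders rather than study data.

```python
# Minimal sketch of a decay-corrected %ID/g calculation for ex-vivo
# biodistribution counts. Numerical values are placeholders, not study data.
import math

GA68_HALF_LIFE_MIN = 68.0  # approximate physical half-life of gallium-68

def decay_correct(counts, minutes_since_injection):
    """Correct measured counts back to the time of injection."""
    return counts * math.exp(math.log(2) * minutes_since_injection / GA68_HALF_LIFE_MIN)

def percent_id_per_gram(organ_counts, organ_mass_g, injected_dose_counts, t_min):
    corrected = decay_correct(organ_counts, t_min)
    return 100.0 * corrected / (injected_dose_counts * organ_mass_g)

# Example: a lung sample counted 60 min post-injection (all values assumed)
print(percent_id_per_gram(organ_counts=5.0e5, organ_mass_g=1.2,
                          injected_dose_counts=2.0e7, t_min=60))
```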
Procedia PDF Downloads 173
1616 Fostering Students' Cultural Intelligence: A Social Media Experiential Project
Authors: Lorena Blasco-Arcas, Francesca Pucciarelli
Abstract:
Business contexts have become globalised and digitalised, which requires that managers develop a strong sense of cross-cultural intelligence while working in geographically distant teams by means of digital technologies. How to better equip future managers with these kinds of skills has been put forward as a critical issue in business schools. In pursuing these goals, higher education is shifting from a passive lecture approach to more active and experiential learning approaches that are better suited to learning skills. For example, through the use of case studies proposing plausible business problems to be solved by students (or teams of students), these institutions have long focused on fostering learning by doing. However, case studies are no longer enough as a tool to promote active teamwork and experiential learning. Moreover, digital advancements applied to educational settings have enabled augmented classrooms, expanding the learning experience beyond the class, which increases students’ engagement and experiential learning. Different authors have highlighted the benefits of digital engagement in order to achieve a deeper and longer-lasting learning and comprehension of core marketing concepts. Clickers, computer-based simulations and business games have become fairly popular among instructors, but they are still limited by the fact that they are fictional experiences. Further exploration of real digital platforms to implement real, live projects in the classroom seems relevant for marketing and business education. Building on this, this paper describes the development of an experiential learning activity in class, in which students developed a communication campaign in teams using the BuzzFeed platform and subsequently implemented the campaign by using other social media platforms (e.g., Facebook, Instagram, Twitter…). The article details the procedure of using the project for a marketing module in a Bachelor program with students located on campuses in France, Italy and Spain working in multi-campus groups. Further, this paper describes the project outcomes in terms of students’ engagement and analytics (i.e., visits achieved). The project included a survey in order to analyze and identify the main aspects of how the learning experience is influenced by the cultural competence developed through working in geographically distant and culturally diverse teams. Finally, some recommendations for using project-based social media tools while working with virtual teamwork in the classroom are provided.
Keywords: cultural competences, experiential learning, social media, teamwork, virtual group work
Procedia PDF Downloads 179
1615 Deconstructing and Reconstructing the Definition of Inhuman Treatment in International Law
Authors: Sonia Boulos
Abstract:
The prohibition on ‘inhuman treatment’ constitutes one of the central tenets of modern international human rights law. It is incorporated in principal international human rights instruments including Article 5 of the Universal Declaration of Human Rights, and Article 7 of the International Covenant on Civil and Political Rights. However, in the absence of any legislative definition of the term ‘inhuman’, its interpretation becomes challenging. The aim of this article is to critically analyze the interpretation of the term ‘inhuman’ in international human rights law and to suggest a new approach to construct its meaning. The article is composed of two central parts. The first part is a critical appraisal of the interpretation of the term ‘inhuman’ by supra-national human rights law institutions. It highlights the failure of supra-national institutions to provide an independent definition for the term ‘inhuman’. In fact, those institutions consistently fail to distinguish the term ‘inhuman’ from its other kin terms, i.e. ‘cruel’ and ‘degrading.’ Very often, they refer to these three prohibitions as ‘CIDT’, as if they were one collective. They were primarily preoccupied with distinguishing ‘CIDT’ from ‘torture.’ By blurring the conceptual differences between these three terms, supra-national institutions supplemented them with a long list of specific and purely descriptive subsidiary rules. In most cases, those subsidiary rules were announced in the absence of sufficient legal reasoning explaining how they were derived from abstract and evaluative standards embodied in the prohibitions collectively referred to as ‘CIDT.’ By opting for this approach, supra-national institutions have created the risk of developing an incoherent body of jurisprudence on those terms at the international level. They also have failed to provide guidance for domestic courts on how to enforce these prohibitions. While blurring the differences between the terms ‘cruel,’ ‘inhuman,’ and ‘degrading’ has consequences for the three, the term ‘inhuman’ remains the most impoverished one. It is easy to link the term ‘cruel’ to the clause on ‘cruel and unusual punishment’ originating from the English Bill of Rights of 1689. It is also easy to see that the term ‘degrading’ reflects a dignitarian ideal. However, when we turn to the term ‘inhuman’, we are left without any interpretative clue. The second part of the article suggests that the ordinary meaning of the word ‘inhuman’ should be our first clue. However, regaining the conceptual independence of the term ‘inhuman’ requires more than a mere reflection on the word-meaning of the term. Thus, the second part introduces philosophical concepts related to the understanding of what it means to be human. It focuses on ‘the capabilities approach’ and the notion of ‘human functioning’, introduced by Amartya Sen and further explored by Martha Nussbaum. Nussbaum’s work on the basic human capabilities is particularly helpful or even vital for understanding the moral and legal substance of the prohibition on ‘inhuman’ treatment.
Keywords: inhuman treatment, capabilities approach, human functioning, supra-national institutions
Procedia PDF Downloads 278
1614 Improving Student Learning in a Math Bridge Course through Computer Algebra Systems
Authors: Alejandro Adorjan
Abstract:
Universities are motivated to understand the factors contributing to low retention of engineering undergraduates. While the number of precollege students interested in engineering increases, the number of engineering graduates continues to decrease and attrition rates for engineering undergraduates remain high. Calculus 1 (C1) is the entry point of most undergraduate Engineering Science programs and often a prerequisite for Computing Curricula courses. Mathematics continues to be a major hurdle for engineering students, and many students who drop out of engineering specifically cite Calculus as one of the most influential factors in that decision. In this context, creating course activities that increase retention and motivate students to obtain better final results is a challenge. In order to develop several competencies in our Software Engineering students, Calculus 1 at Universidad ORT Uruguay focuses on competencies such as capacity for synthesis, abstraction, and problem solving (based on the ACM/AIS/IEEE recommendations). Every semester we try to reflect on our practice and to answer the following research question: What kind of teaching approach in Calculus 1 can we design to retain students and obtain better results? Since 2010, Universidad ORT Uruguay has offered a six-week, non-compulsory summer bridge course of preparatory math (to bridge the math gap between high school and university). Last semester was the first time the Department of Mathematics offered the course while students were enrolled in C1. Traditional lectures in this bridge course led students to merely transcribe notes from the blackboard. Last semester we proposed a hands-on lab course using GeoGebra (interactive geometry and computer algebra system (CAS) software) as a math-driven development tool. Students worked in a computer laboratory class and developed most of the tasks and topics in GeoGebra. As a result of this approach, several pros and cons were found. The number of weekly hours of mathematics was excessive for students and, as the course was non-compulsory, attendance decreased with time. Nevertheless, this activity succeeded in improving final test results, and most students expressed the pleasure of working with this methodology. This technology-oriented teaching approach strengthens the student math competencies needed for Calculus 1 and improves student performance, engagement, and self-confidence. It is important as teachers to reflect on our practice, including innovative proposals with the objective of engaging students, increasing retention and obtaining better results. The high degree of motivation and engagement of participants with this methodology exceeded our initial expectations, so we plan to experiment with more groups during the summer so as to validate the preliminary results.
Keywords: calculus, engineering education, PreCalculus, Summer Program
Procedia PDF Downloads 290
1613 Effects of Oxytocin on Neural Response to Facial Emotion Recognition in Schizophrenia
Authors: Avyarthana Dey, Naren P. Rao, Arpitha Jacob, Chaitra V. Hiremath, Shivarama Varambally, Ganesan Venkatasubramanian, Rose Dawn Bharath, Bangalore N. Gangadhar
Abstract:
Objective: Impaired facial emotion recognition is widely reported in schizophrenia. The neuropeptide oxytocin is known to modulate brain regions involved in facial emotion recognition, namely the amygdala, in healthy volunteers. However, its effect on the facial emotion recognition deficits seen in schizophrenia is not well explored. In this study, we examined the effect of intranasal OXT on processing facial emotions and its neural correlates in patients with schizophrenia. Method: Twelve male patients (age = 31.08±7.61 years, education = 14.50±2.20 years) participated in this single-blind, counterbalanced functional magnetic resonance imaging (fMRI) study. All participants underwent three fMRI scans: one at baseline, and one each after a single dose of 24 IU intranasal OXT and intranasal placebo. The order of administration of OXT and placebo was counterbalanced, and the subject was blind to the drug administered. Participants performed a facial emotion recognition task presented in a block design with six alternating blocks of faces and shapes. The faces depicted happy, angry or fearful emotions. The images were preprocessed and analyzed using SPM 12. First-level contrasts comparing recognition of emotions and shapes were modelled at the individual subject level. A group-level analysis was performed using the contrasts generated at the first level to compare the effects of intranasal OXT and placebo. The results were thresholded at uncorrected p < 0.001 with a cluster size of 6 voxels. Results: Compared to placebo, intranasal OXT attenuated activity in the inferior temporal, fusiform and parahippocampal gyri (BA 20), premotor cortex (BA 6), middle frontal gyrus (BA 10) and anterior cingulate gyrus (BA 24), and enhanced activity in the middle occipital gyrus (BA 18), inferior occipital gyrus (BA 19), and superior temporal gyrus (BA 22). There were no significant differences in emotion recognition accuracy scores between the baseline (77.3±18.38), oxytocin (82.63±10.92) and placebo (76.62±22.67) conditions. Conclusion: Our results provide further evidence of the modulatory effect of oxytocin in patients with schizophrenia. Single-dose oxytocin resulted in significant changes in the activity of brain regions involved in emotion processing. Future studies need to examine the effectiveness of long-term treatment with OXT for emotion recognition deficits in patients with schizophrenia.
Keywords: recognition, functional connectivity, oxytocin, schizophrenia, social cognition
Procedia PDF Downloads 220
1612 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as a high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute to low energy consumption the most is its sparsity; to be more specific, this sensor only captures the pixels that undergo an intensity change. In other words, there is no signal in areas that do not have any intensity change. That is to say, this sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, it is difficult to handle the data because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, as existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. In order to solve the difficulties caused by the data format differences, most of the prior art converts the events into frame data and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition purposes. However, even though the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, it is apparent that polarity information is not rich enough. Considering this context, we proposed to use the timestamp information as the data representation that is fed to deep learning. Concretely, we first make frame data divided by a certain time period, then assign an intensity value according to the timestamp in each frame; for example, a high value is given to a recent signal. We expected that this data representation could capture the features, especially of moving objects, because the timestamps represent the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car to develop an application for a surveillance system that can detect persons around the car. We think the DVS is one of the ideal sensors for surveillance purposes because this sensor can run for a long time with low energy consumption in a largely static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which makes frames in the same way as ours but feeds polarity information to the CNN. Then, we measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
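A minimal Python sketch of the timestamp-based frame representation described above is given below: events falling in a time window are accumulated into an image whose pixel value encodes how recent the latest event at that pixel is. The linear (0, 1] normalization and the handling of polarity (ignored here; it could be kept as a separate channel) are assumptions for illustration, not the authors' exact implementation.

```python
# Sketch of a timestamp-based frame representation for DVS events:
# within a time window, each pixel stores a value proportional to how recent
# the latest event at that pixel is (1.0 = end of the window).
import numpy as np

def events_to_timestamp_frame(x, y, t, t_start, t_end, height, width):
    """x, y: pixel coordinates; t: event timestamps (NumPy arrays of equal length)."""
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (t >= t_start) & (t < t_end)
    # Map each event's timestamp linearly into (0, 1]
    relative = ((t[mask] - t_start) / (t_end - t_start)).astype(np.float32)
    # Keep the most recent (largest) value per pixel, deterministically
    np.maximum.at(frame, (y[mask], x[mask]), relative)
    return frame

# Example with a handful of synthetic events on a 4x4 sensor
x = np.array([0, 1, 1, 3]); y = np.array([0, 2, 2, 3])
t = np.array([10.0, 20.0, 40.0, 45.0])
print(events_to_timestamp_frame(x, y, t, t_start=0.0, t_end=50.0, height=4, width=4))
```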
Procedia PDF Downloads 97
1611 Road Accidents to School Children in Dar es Salaam, Tanzania
Authors: Kabuga Daniel
Abstract:
Road accidents resulting in deaths and injuries have become a new public health challenge, especially in developing countries, including Tanzania. Reports from the Tanzania Traffic Police Force show that accidents increased in 2016 compared to the previous year, 2015, rising from 3,710 to 5,219. Accident and safety data indicate that children are the most vulnerable to road crashes: 78 pupils died and 182 others were seriously injured in separate road accidents last year. A survey done by Amend indicates that pupils' modes of transport to Dar es Salaam schools are walking 87%, bus 9.21%, car 1.32%, motorcycle 0.88%, 3-wheeler 0.24%, train 0.14%, bicycle 0.10%, ferry 0.07%, and combined modes 0.44%. According to this study, the majority of school children walk; most of them agreed to continue walking and requested signs for traffic control when crossing the road, such as STOP and CHILD CROSSING signs, for safe crossing. Children not only ride in buses (daladala) but also walk in groups to and from school, yet few (33.2%) parents or adults are willing to supervise their children while walking to school, while 50% of parents would agree to let their children walk alone to school if public transport started from a nearby street. The study used both qualitative and quantitative research methods by conducting physical surveys in sample districts. The main objectives of this research are to identify all the factors affecting school children when they use public roads; to promote and encourage the safe use of public roads by all users, especially pupils and students, through the circulation of advice, information and knowledge gained from the research; and to recommend future directions for road design and planning for vulnerable users. The research also critically analyses the problems causing deaths and injuries to school children in the Dar es Salaam Region. The study examines the relationship between road traffic accidents and factors such as socio-economic status, distance from school, number of siblings, behavioural problems, knowledge and attitudes of the public and of parents towards road safety, and parents' education. The study comes up with several recommendations, including infrastructure improvements such as safe footpaths, safe crossings, speed humps, speed limits, and road signs. Planners and policymakers wishing to increase walking and cycling among children need to consider options that address distance constraints, and land use planners and transport professionals need a better understanding of the various factors that affect children's choices of school travel mode; the results suggest that all school travel attributes should be considered when planning school locations.
Keywords: accidents, children, school, Tanzania
Procedia PDF Downloads 243
1610 The Sea Striker: The Relevance of Small Assets Using an Integrated Conception with Operational Performance Computations
Authors: Gaëtan Calvar, Christophe Bouvier, Alexis Blasselle
Abstract:
This paper presents the Sea Striker, a compact hydrofoil designed to address some of the issues raised by recent evolutions of naval missions, threats and operation theatres in modern warfare. Able to perform a wide range of operations, the Sea Striker is a 40-meter stealth surface combatant equipped with a gas turbine and aft and forward foils to reach high speeds. Its stealthiness comes from the combination of a composite structure, exterior design, and the advanced integration of sensors. The ship is fitted with a powerful and adaptable combat system, ensuring a versatile and efficient response to modern threats. Lightly manned with a core crew of 10, this hydrofoil is highly automated and can be remotely piloted for special forces operations or transit. Such a ship is not new: similar vessels have been used in the past by different navies, for example by the US Navy with the USS Pegasus. Nevertheless, recent evolutions in science and technology on the one hand, and the emergence of new missions, threats and operation theatres on the other, put the concept forward as an answer to today's operational challenges. Indeed, even if multiple opinions and analyses can be given regarding modern warfare and naval surface operations, general observations and tendencies can be drawn, such as the major increase in sensor and weapon types, ranges and, more generally, capacities; the emergence of new versatile and evolving threats and enemies, such as asymmetric groups, drone swarms or hypersonic missiles; and the growing number of operation theatres located in coastal and shallow waters. This research comprised a complete study of the ship together with several operational performance computations in order to justify the relevance of using ships like the Sea Striker in naval surface operations. For the selected scenarios, the design process enabled measuring the performance, namely a "Measure of Efficiency" in the NATO framework, for two different kinds of models: a centralized, classic model using large and powerful ships, and a distributed model relying on several Sea Strikers. After this stage, a comparison of the two models was performed. Lethal, agile, stealthy, compact and fitted with a complete set of sensors, the Sea Striker is a new major player in modern warfare and constitutes a very attractive option between the naval unit and the combat helicopter, enabling high operational performance at a reduced cost.
Keywords: surface combatant, compact, hydrofoil, stealth, velocity, lethal
Procedia PDF Downloads 117
1609 Quality Improvement of the Sand Moulding Process in Foundries Using Six Sigma Technique
Authors: Cindy Sithole, Didier Nyembwe, Peter Olubambi
Abstract:
The sand casting process involves pattern making, mould making, metal pouring and shake-out. Every step in the sand moulding process is critical for the production of good quality castings. However, waste generated during the sand moulding operation and lack of quality are matters that drive performance inefficiencies and a lack of competitiveness in South African foundries. Defects produced in the sand moulding process become visible only in the final product (the casting), which results in increased scrap, reduced sales and increased costs in the foundry. The purpose of this research is to propose a Six Sigma (DMAIC: Define, Measure, Analyze, Improve and Control) intervention in sand moulding foundries and to reduce the variation caused by deficiencies in the sand moulding process in South African foundries. Its objective is to create sustainability and enhance productivity in the South African foundry industry. Six Sigma is a data-driven method of process improvement that aims to eliminate variation in business processes using statistical control methods. Six Sigma focuses on business performance improvement through quality initiatives using Ishikawa's seven basic tools of quality. The objectives of Six Sigma are to eliminate features that affect productivity, profit and the meeting of customers' demands. Six Sigma has become one of the most important tools/techniques for attaining competitive advantage. Competitive advantage for sand casting foundries in South Africa means improved plant maintenance processes, improved product quality and proper utilization of resources, especially scarce resources. Defects such as sand inclusions, flashes and sand burn-on were identified, using the Six Sigma technique, as resulting from sand moulding process inefficiencies. The causes were found to be wrong design of the mould due to the pattern used and poor ramming of the moulding sand in the foundry. Six Sigma tools such as the voice of the customer, the fishbone diagram, the voice of the process and process mapping were used to define the problem in the foundry and to outline the critical-to-quality elements. The SIPOC (Supplier, Input, Process, Output, Customer) diagram was also employed to ensure that the material and process parameters were achieved, to ensure quality improvement in the foundry. The process capability of the sand moulding process was measured to understand the current performance and enable improvement. The expected results of this research are reduced sand moulding process variation, increased productivity and competitive advantage.
Keywords: defects, foundries, quality improvement, sand moulding, six sigma (DMAIC)
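For readers unfamiliar with the Measure phase mentioned above, the short sketch below computes the usual process capability indices Cp and Cpk from a set of mould-quality measurements; the hardness values and specification limits are hypothetical and are not taken from the study.

```python
import numpy as np

def process_capability(measurements, lsl, usl):
    """Estimate the capability indices Cp and Cpk for a process given its
    lower (lsl) and upper (usl) specification limits."""
    mu = np.mean(measurements)
    sigma = np.std(measurements, ddof=1)         # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)               # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # capability accounting for centring
    return cp, cpk

# Hypothetical green-sand mould hardness readings and specification limits
hardness = np.array([85, 88, 90, 87, 86, 91, 84, 89, 88, 90])
cp, cpk = process_capability(hardness, lsl=80, usl=95)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # values below ~1.33 suggest room for improvement
```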
Procedia PDF Downloads 195
1608 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (polysaccharides contribute up to 75% of the composition) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which leads to the release of reducing sugars such as glucose and xylose. However, the inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups and crystalline cellulose, contribute to recalcitrance. This leads to poor sugar yields upon enzymatic hydrolysis of lignocellulose. A pre-treatment method is generally applied before enzymatic treatment of lignocellulose that essentially removes the recalcitrant components in biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for the maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical and physico-chemical pre-treatments; in addition, a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments was applied to attain maximum sugar yield. All the pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with an RI detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored. Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method based on total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide present in the pre-treated SPW. Finally, the results obtained from the study were used to design a sequential lignocellulose pre-treatment protocol to decrease the formation of enzyme inhibitors and increase the sugar yield on enzymatic hydrolysis employing a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and lower formation of inhibitory compounds, which could be because this mode of pre-treatment combines several mild treatments rather than a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.
Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound
Procedia PDF Downloads 366
1607 Deasphalting of Crude Oil by Extraction Method
Authors: A. N. Kurbanova, G. K. Sugurbekova, N. K. Akhmetov
Abstract:
Asphaltenes are the heavy fraction of crude oil. In oilfields, asphaltenes are known for their ability to plug wells, surface equipment and the pores of geologic formations. The present research is devoted to the deasphalting of crude oil as the initial stage of oil refining. Solvent deasphalting was conducted by extraction with organic solvents (cyclohexane, carbon tetrachloride, chloroform). The metal content was analysed by ICP-MS, and the spectral features during deasphalting were characterised by FTIR. A high content of asphaltenes in crude oil reduces the efficiency of refining processes. Moreover, the high content of heteroatoms (e.g., S, N) in asphaltenes also causes problems: environmental pollution, corrosion and poisoning of the catalyst. The main objective of this work is to study the effect of the deasphalting process on crude oil in order to improve its properties and the efficiency of refining processes. Solvent extraction experiments using organic solvents were carried out on crude oil from JSC "Pavlodar Oil Chemistry Refinery". Experimental results show that the deasphalting process also leads to a decrease of Ni and V in the composition of the oil. One solution to the problem of cleaning oils of metals, hydrogen sulfide and mercaptans is absorption with chemical reagents directly in the oil residue during production, since asphaltic and resinous substances degrade the operational properties of oils and reduce the effectiveness of their selective refining. Deasphalting of crude oil is necessary to separate the light fraction from the heavy, metal-bearing asphaltene part of the crude oil. For this reason, the oil is pretreated by deasphalting, because asphaltenes tend to form coke or consume large quantities of hydrogen. Removing asphaltenes leads to partial demetallization, i.e. the removal, together with the asphaltenes, of V/Ni and organic compounds with heteroatoms. Intramolecular complexes are relatively well studied using the example of the porphyrin complexes of vanadium (VO₂) and nickel (Ni). Studies of V/Ni by ICP-MS determined the effect of the different deasphalting solvents on metal extraction at the deasphalting stage and allowed the best organic solvent to be selected. Cyclohexane (C₆H₁₂) proved to be the best solvent for producing deasphalted oil (DAO), removing 51.2% of V and 66.4% of Ni according to ICP-MS. This paper also presents the results of a study of the physical and chemical properties and FTIR spectral characteristics of the oil, with a view to establishing its hydrocarbon composition. The information about the whole oil obtained by IR spectroscopy gives provisional physical and chemical characteristics. It can be useful in considering questions of the origin and geochemical conditions of accumulation of the oil, as well as some technological challenges. The systematic analysis carried out in this study improves our understanding of the stability mechanism of asphaltenes. The role of deasphalted crude oil fractions in asphaltene stability is described.
Keywords: asphaltenes, deasphalting, extraction, vanadium, nickel, metalloporphyrins, ICP-MS, IR spectroscopy
Procedia PDF Downloads 242
1606 Preschoolers' Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as in their social participation. Previous research has demonstrated that young children do not trust others blindly but make selective trust judgments based on available information. According to Mayer et al.'s model of trust, the characteristics of speakers, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (within-subjects factor: moral vs. immoral promises) and the fulfilment of promises (within-subjects factor: kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving keeping or breaking moral/immoral promises in order to investigate children's trust judgments. Experiment 2 used single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. In addition, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment in moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
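The reported a-priori sample size can be checked with a short power-analysis sketch. The snippet below approximates the G*Power 3.1 calculation using statsmodels and assumes a chi-square test with one degree of freedom (n_bins = 2); that df value is an assumption for illustration, as it is not stated in the abstract.

```python
from statsmodels.stats.power import GofChisquarePower

# Solve for the number of participants needed to detect w = 0.30
# at alpha = 0.05 with power (1 - beta) = 0.85.
analysis = GofChisquarePower()
n = analysis.solve_power(effect_size=0.30, alpha=0.05, power=0.85, n_bins=2)
print(round(n))  # ~100 children, matching the reported sample size
```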
Procedia PDF Downloads 54
1605 Argos System: Improvements and Future of the Constellation
Authors: Sophie Baudel, Aline Duplaa, Jean Muller, Stephan Lauriol, Yann Bernard
Abstract:
Argos is the main satellite telemetry system used by the wildlife research community since its creation in 1978 for animal tracking and scientific data collection all around the world, in order to analyze and understand animal migrations and behavior. Marine mammal biology is one of the major disciplines that has benefited from Argos telemetry, and conversely, the marine mammal biologist community has contributed greatly to the growth and development of Argos use cases. The Argos constellation, with 6 satellites in orbit in 2017 (Argos 2 payload on NOAA 15 and NOAA 18, Argos 3 payload on NOAA 19, SARAL, METOP A and METOP B), is being extended in the following years with an Argos 3 payload on METOP C (launch in October 2018) and Argos 4 payloads on Oceansat 3 (launch in 2019), CDARS in December 2021 (to be confirmed), METOP-SG B1 in December 2022, and METOP-SG B2 in 2029. Argos 4 will allow more frequency bands (600 kHz for Argos4NG instead of 110 kHz for Argos 3), a new modulation dedicated to animal (sea turtle) tracking allowing very low transmission power transmitters (50 to 100 mW) with very low data rates (124 bps), and enhancement of high data rates (1200-4800 bps) and downlink performance, all contributing to enhancing the system capacity (50,000 active beacons per month instead of 20,000 today). In parallel with this 'institutional Argos' constellation, in the context of a miniaturization trend in the space industry aimed at reducing costs and multiplying satellites to serve more and more societal needs, the French Space Agency CNES, which designs the Argos payloads, is innovating and launching the Argos ANGELS project (Argos NEO Generic Economic Light Satellites). ANGELS will lead to a nanosatellite prototype with an Argos NEO instrument (30 cm x 30 cm x 20 cm) that will be launched in 2019. In the meantime, the design of the renewal of the Argos constellation, called Argos For Next Generations (Argos4NG), is on track and will be operational in 2022. Based on Argos 4 and benefitting from the feedback of the ANGELS project, this constellation will allow a revisit time of less than 20 minutes on average between two satellite passes and will also bring more frequency bands to improve the overall capacity of the system. The presentation will give an overview of the Argos system, present and future, and the new capacities coming with it. On top of that, use cases of two Argos hardware modules will be presented: the goniometer pathfinder, which allows recovering Argos beacons at sea or on the ground within a 100 km radius horizon-free circle around the beacon location, and the new Argos 4 chipset called 'Artic', already available and tested by several manufacturers.
Keywords: Argos satellite telemetry, marine protected areas, oceanography, maritime services
Procedia PDF Downloads 182
1604 Combined Cultivation of Endemic Strains of Lactic Acid Bacteria and Yeast with Antimicrobial Properties
Authors: A. M. Isakhanyan, F. N. Tkhruni, N. N. Yakimovich, Z. I. Kuvaeva, T. V. Khachatryan
Abstract:
Introduction: At present, symbiotic preparations based on different genera and species of lactic acid bacteria (LAB) and yeasts are used. One of the basic properties of probiotics is the presence of antimicrobial activity; therefore, the selection of LAB and yeast strains for co-cultivation with the aim of increasing this activity is topical. Since probiotic yeasts and bacteria have different mechanisms of action, natural synergies between species, higher viability and increased antimicrobial activity might be expected from mixing both types of probiotics. Endemic LAB strains Enterococcus faecium БТK-64, Lactobacillus plantarum БТK-66, Pediococcus pentosus БТK-28 and Lactobacillus rhamnosus БТK-109, and yeast strains Kluyveromyces lactis БТX-412 and Saccharomycopsis sp. БТX-151, with probiotic properties and high antimicrobial activity, were selected. The strains are deposited in the "Microbial Depository Center" (MDC) of SPC "Armbiotechnology". Methods: LAB and yeast strains were isolated from different dairy products from rural households of Armenia. Genotyping by 16S rRNA sequencing for LAB and 26S rRNA sequencing for yeast was used. Combined cultivation of LAB and yeast strains was carried out in nutrient media based on milk whey under anaerobic conditions (without shaking, in a thermostat at 37°C for 48 hours). The complex preparations were obtained by purification of the cell-free culture (CFC) broth by a combination of ion-exchange chromatography and gel filtration. The spot-on-lawn method was applied for the determination of antimicrobial activity, expressed in arbitrary units (AU/ml). Results: The obtained data showed that during the combined growth of bacteria and yeasts, the cultivation conditions (medium composition, time of growth, genera of LAB and yeasts) affected the display of antimicrobial activity. Purification of the CFC broth allowed a partially purified antimicrobial complex preparation to be obtained, containing metabiotics from both bacteria and yeast. The complex preparation inhibited the growth of pathogenic and conditionally pathogenic bacteria isolated from various internal organs of diseased animals and poultry with greater efficiency than the preparations derived individually from yeast or LAB strains. Discussion: Thus, our data show the prospects for creating a new class of antimicrobial preparations based on the combined cultivation of endemic strains of LAB and yeast. The results obtained suggest the prospect of using the partially purified complex preparations instead of antibiotics in agriculture and for food safety. Acknowledgments: This work was supported by the RA MES State Committee of Science and the Belarus National Foundation for Basic Research in the frame of the joint Armenian-Belarusian research project 13РБ-064.
Keywords: co-cultivation, antimicrobial activity, biosafety, metabiotics, lactic acid bacteria, yeast
Procedia PDF Downloads 339
1603 Social Value of Travel Time Savings in Sub-Saharan Africa
Authors: Richard Sogah
Abstract:
The significance of transport infrastructure investments for economic growth and development has been central to the World Bank's strategy for poverty reduction. Among conventional surface transport infrastructures, road infrastructure is significant in facilitating the movement of human capital, goods and services. When transport projects (e.g., roads, super-highways) are implemented, they come with some negative social values (costs), such as increased noise and air pollution for local residents living near these facilities, displaced individuals, etc. However, these projects also facilitate better utilization of the existing capital stock and generate other observable benefits that can be easily quantified. For example, the improvement or construction of roads creates employment, stimulates revenue generation (tolls), reduces vehicle operating costs and accidents, increases accessibility, expands trade, improves safety, etc. Aside from these benefits, travel time savings (TTSs), which are the major economic benefit of urban and inter-urban transport projects and are therefore integral to their economic assessment, are often overlooked and omitted when estimating the benefits of transport projects, especially in developing countries. The absence of current and reliable domestic travel data, together with the inability of models replicated from the developed world to capture the actual value of travel time savings in the presence of large unemployment, underemployment, and other labor-induced distortions, has contributed to the failure to assign a value to travel time savings when estimating the benefits of transport schemes in developing countries. This omission of the value of travel time savings from the benefits of transport projects in developing countries makes it difficult for investors and stakeholders to accept or dismiss projects, and it favors schemes that reduce vehicle operating costs and other parameters over those that ease congestion, increase average speed, facilitate walking and handloading, and thus save travel time. Given the complex reality of estimating the value of travel time savings and the widespread informal labour activities in Sub-Saharan Africa, we construct a "nationally ranked distribution of time values" and estimate the value of travel time savings based on the area beneath the distribution. Compared with other approaches, our method captures both formal-sector workers and people who work outside the formal sector, for whom changes in time allocation occur in the informal economy and in household production activities. The dataset for the estimations is sourced from the World Bank, the International Labour Organization, etc.
Keywords: road infrastructure, transport projects, travel time savings, congestion, Sub-Sahara Africa
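A minimal numerical sketch of the ranked-distribution idea described above follows. It assumes a small hypothetical sample of individual time values (covering formal- and informal-sector workers), ranks them, and takes the area beneath the ranked distribution, normalised by population share, as the aggregate value of an hour saved; the figures and the exact aggregation rule are illustrative assumptions, not the authors' data or final estimator.

```python
import numpy as np

# Hypothetical hourly time values (USD) for a mixed sample of formal- and
# informal-sector workers; in practice these would be built from World Bank / ILO data.
time_values = np.array([0.2, 0.4, 0.5, 0.8, 1.1, 1.5, 2.0, 2.6, 3.4, 5.0])

ranked = np.sort(time_values)                # nationally ranked distribution of time values
shares = np.linspace(0.0, 1.0, len(ranked))  # cumulative population share (0 to 1)

# Area beneath the ranked distribution = average social value of one hour saved per person
value_per_hour = np.trapz(ranked, shares)
print(f"Estimated value of a one-hour travel time saving: ${value_per_hour:.2f} per person")
```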
Procedia PDF Downloads 109
1602 Inequality of Opportunities and Dropping Out of High School: Perspectives for Students from a Public School and a Private School in Brazil
Authors: Joyce Mary Adam
Abstract:
The subject of youth and education has been on the agenda of both public policies in general and education policies in particular. In this sense, this work aims to discuss, based on the concepts of social capital and cultural capital, the possibilities students have of elaborating and putting into practice the life projects they build during secondary school. The critical view brought by the concepts of social capital and cultural capital considers that, in the school environment, those who have social and cultural capital have more tools to carry their projects forward, while those who do not will consequently have fewer opportunities, a fact that directly contributes to the perpetuation of social and educational inequality. When the "Life Project" is discussed as the sole responsibility of the students, it is the students who must "take their responsibilities and decisions" and own their success or failure. From this point of view, the success of the implementation of the Life Project is determined by how well the students have developed their "skills and competencies" and their capacity for entrepreneurship, without promoting a critical reflection on the real economic difficulties of the majority of students at this level of education. This situation gives rise to feelings of self-blame and self-responsibility among young people, who are compelled to confront the reality that their expectations have not been fulfilled, that they have been unable to gain employment, and, in some instances, that they have been marginalized. In this regard, the research project aimed to gather data, through interviews, on the living conditions of students at a public school and an elite private school in Brazil. The main objective of the research was to analyze the students' cultural and social capital as a key element in their social and professional integration after completing this stage of education. The study showed that social and cultural capital has a significant influence on opportunities to continue studying or to find a satisfactory job. For young people from public schools and from lower economic classes, the need to enter the job market as soon as they finish, or even before they finish, high school is driven by economic and survival issues. The hours that can be dedicated to study and the diversity of cultural activities, such as trips, visits to museums, or the cultivation of artistic activities, proved to be rarer for poorer students in state schools. In conclusion, we found that the difference in social and cultural capital between the young people taking part in the research plays an important role in their social and professional integration and contributes to the maintenance of school and social inequality. This highlights the importance of public policies and support networks for young people leaving secondary school.
Keywords: social capital, cultural capital, high school, life project, social insertion, professional insertion, youth
Procedia PDF Downloads 26
1601 Performance Analysis of Double Gate FinFET at Sub-10NM Node
Authors: Suruchi Saini, Hitender Kumar Tyagi
Abstract:
With the rapid progress of the nanotechnology industry, it is becoming increasingly important to have compact semiconductor devices that function well and offer the best results at various technology nodes. As devices are scaled down, several short-channel effects occur. To minimize these scaling limitations, several device architectures have been developed in the semiconductor industry. The FinFET is one of the most promising structures, and the double-gate 2D Fin field-effect transistor has the benefit of suppressing short-channel effects (SCEs) and functioning well at technology nodes below 14 nm. In the present research, the MuGFET simulation tool is used to analyze and explain the electrical behaviour of a double-gate 2D Fin field-effect transistor. The drift-diffusion and Poisson equations are solved self-consistently. Various models, such as the Fermi-Dirac distribution, bandgap narrowing, carrier scattering, and concentration-dependent mobility models, are used for device simulation. The transfer and output characteristics of the double-gate 2D Fin field-effect transistor are determined at the 10 nm technology node. The performance parameters are extracted in terms of threshold voltage, transconductance, leakage current and current on-off ratio. In this paper, the device performance is analyzed for different structural parameters. The Id-Vg curve is a robust tool of great importance for understanding field-effect transistors and is widely used in transistor modeling, circuit design, performance optimization, and quality control of electronic devices and integrated circuits. The FinFET structure is optimized to increase the current on-off ratio and the transconductance. Through this analysis, the impact of different channel widths and source and drain lengths on the Id-Vg characteristics and the transconductance is examined. Device performance is affected by the difficulty of maintaining effective gate control over the channel at decreasing feature sizes. For every set of simulations, the device characteristics are simulated at two different drain voltages, 50 mV and 0.7 V. In low-power and precision applications, the off-state current is a significant factor to consider; therefore, it is crucial to minimize the off-state current to maximize circuit performance and efficiency. The findings demonstrate that the current on-off ratio is maximum for a channel width of 3 nm at a gate length of 10 nm, while the source and drain lengths have no significant effect on the current on-off ratio. The transconductance plays a pivotal role in various electronic applications and should be considered carefully. In this research, it is also concluded that a transconductance of 340 S/m is achieved with a fin width of 3 nm at a gate length of 10 nm, and of 2380 S/m with a source and drain extension length of 5 nm.
Keywords: current on-off ratio, FinFET, short-channel effects, transconductance
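The parameter-extraction step mentioned above (threshold voltage, transconductance and on-off ratio from the Id-Vg curve) can be sketched as simple post-processing of a transfer characteristic. The snippet below uses hypothetical Id-Vg data, not the MuGFET output, and a constant-current threshold criterion chosen purely for illustration.

```python
import numpy as np

# Hypothetical transfer characteristic (Id vs. Vg) at Vds = 0.7 V
vg = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])                   # gate voltage (V)
id_a = np.array([1e-10, 8e-10, 6e-9, 5e-8, 4e-7, 3e-6, 1.2e-5, 2.5e-5])   # drain current (A)

gm = np.gradient(id_a, vg)         # transconductance gm = dId/dVg (S)
ion_ioff = id_a[-1] / id_a[0]      # current on-off ratio
vth = vg[np.argmax(id_a >= 1e-7)]  # constant-current threshold voltage (assumed 100 nA criterion)

print(f"peak gm = {gm.max():.2e} S, Ion/Ioff = {ion_ioff:.1e}, Vth ~ {vth:.2f} V")
```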
Procedia PDF Downloads 61
1600 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphical processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement in the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distributed particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re = 200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re = 10,000. The simulations were conducted for L/D = 2, 4 and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling the LBM with a classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
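To convey the flavour of the probabilistic cellular-automata transport step referred to above, the sketch below implements a deliberately simplified 2D version on the CPU: particle counts stored at the lattice nodes are redistributed to neighbouring nodes with probabilities derived from the local fluid velocity and sampled with a multinomial draw, so that both the local velocity and the number of particles at each node are taken into account. The grid, velocity field and lattice connectivity are illustrative assumptions and do not reproduce the authors' D3Q27 GPU implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def ca_particle_step(n_p, ux, uy, dt=1.0, dx=1.0):
    """One probabilistic CA transport step on a 2D periodic grid.

    n_p    : integer particle counts per node, shape (ny, nx)
    ux, uy : local fluid velocity components per node (lattice units)
    """
    ny, nx = n_p.shape
    p_e = np.clip(ux * dt / dx, 0.0, 1.0)    # probability of moving east
    p_w = np.clip(-ux * dt / dx, 0.0, 1.0)   # west
    p_n = np.clip(uy * dt / dx, 0.0, 1.0)    # north
    p_s = np.clip(-uy * dt / dx, 0.0, 1.0)   # south
    p_stay = np.clip(1.0 - (p_e + p_w + p_n + p_s), 0.0, 1.0)

    new_n = np.zeros_like(n_p)
    for j in range(ny):
        for i in range(nx):
            if n_p[j, i] == 0:
                continue
            probs = np.array([p_stay[j, i], p_e[j, i], p_w[j, i], p_n[j, i], p_s[j, i]])
            probs /= probs.sum()
            stay, east, west, north, south = rng.multinomial(n_p[j, i], probs)
            new_n[j, i] += stay
            new_n[j, (i + 1) % nx] += east
            new_n[j, (i - 1) % nx] += west
            new_n[(j + 1) % ny, i] += north
            new_n[(j - 1) % ny, i] += south
    return new_n

# Example: uniform rightward flow advects a packet of particles eastwards
n_p = np.zeros((16, 16), dtype=int)
n_p[8, 2] = 1000
ux = np.full((16, 16), 0.3)
uy = np.zeros((16, 16))
for _ in range(10):
    n_p = ca_particle_step(n_p, ux, uy)
print(n_p.sum(), np.unravel_index(n_p.argmax(), n_p.shape))  # particle number conserved, peak drifted east
```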
Procedia PDF Downloads 207