Search results for: Broad Emission
297 Effect of Several Soil Amendments on Water Quality in Mine Soils: Leaching Columns
Authors: Carmela Monterroso, Marc Romero-Estonllo, Carlos Pascual, Beatriz Rodríguez-Garrido
Abstract:
The mobilization of heavy metals from polluted soils causes their transfer to natural waters, with consequences for ecosystems and human health. Phytostabilization techniques are applied to reduce this mobility through the establishment of a vegetal cover and the application of soil amendments. In this work, the capacity of different organic amendments to improve water quality and reduce the mobility of metals in mine tailings was evaluated. A field pilot test was carried out with leaching columns installed on an old Cu mine (NW of Spain) which forms part of the PhytoSUDOE network of phytomanaged contaminated field sites (PhytoSUDOE/Phy2SUDOE Projects (SOE1/P5/E0189 and SOE4/P5/E1021)). Ten columns (1 meter high by 25 cm in diameter) were packed with untreated mine tailings (control) or with tailings treated with organic amendments. The applied amendments were based on different combinations of municipal wastes, bark chippings, biomass fly ash, and nanoparticles such as aluminum oxides or ferrihydrite-type iron oxides. During the packing of the columns, rhizon samplers were installed at different heights (10, 20, and 50 cm) from the top, and pore water samples were obtained by suction. Additionally, in each column, a bottom leachate sample was collected through a valve installed at the base. After packing, the columns were sown with grasses. Water samples were analyzed for: pH and redox potential, using combined electrodes; salinity, by conductivity meter; bicarbonate, by titration; sulfate, nitrate, and chloride, by ion chromatography (Dionex 2000); phosphate, by colorimetry with ammonium molybdate/ascorbic acid; and Ca, Mg, Fe, Al, Mn, Zn, Cu, Cd, and Pb, by flame atomic absorption/emission spectrometry (Perkin Elmer). Porewater and leachate from the control columns (packed with unamended mine tailings) were extremely acidic and had high concentrations of Al, Fe, and Cu. In these columns, no plant development was observed. The application of organic amendments improved soil conditions, which allowed the establishment of a dense cover of grasses in the rest of the columns. The combined effect of soil amendment and plant growth had a positive impact on water quality and reduced the mobility of aluminum and heavy metals.
Keywords: leaching, organic amendments, phytostabilization, polluted soils
Procedia PDF Downloads 110
296 Inhibition of Food Borne Pathogens by Bacteriocinogenic Enterococcus Strains
Authors: Neha Farid
Abstract:
Due to the abuse of antimicrobial medications in animal feed, the occurrence of multi-drug resistant (MDR) pathogens in foods is currently a growing public health concern on a global scale. MDR infections can penetrate the food chain, posing a serious risk to both consumers and animals. Food pathogens are those biological agents that have the tendency to cause pathogenicity in the host body upon ingestion. The major reservoirs of foodborne pathogens include food-producing fauna such as cows, pigs, goats, sheep, and deer. The intestines of these animals are densely colonized by several different types of food pathogens. Bacterial food pathogens are the main cause of foodborne disease in humans; almost 66% of the cases of food illness reported in a year are caused by bacterial food pathogens. When ingested, these pathogens reproduce and survive or form different kinds of toxins inside host cells, causing severe infections. The genus Listeria consists of gram-positive, rod-shaped, non-spore-forming bacteria. The disease caused by Listeria monocytogenes is listeriosis or gastroenteritis, which induces fever, vomiting, and severe diarrhea in the affected body. Campylobacter jejuni is a gram-negative, curved-rod-shaped bacterium causing foodborne illness. The major source of Campylobacter jejuni is livestock and poultry; chicken in particular is highly colonized with Campylobacter jejuni. Serious public health concerns include the widespread growth of bacteria that are resistant to antibiotics and the slowdown in the discovery of new classes of medicines. The objective of this study is to compare the antibacterial activity of certain broad-range antibiotics with that of bacteriocins from specific Enterococcus faecium strains, preventing microbial contamination pathways in order to safeguard food by lowering food deterioration, contamination, and foodborne illness. The food pathogens were isolated from various sources of dairy products and meat samples. The isolates were tested for the presence of Listeria and Campylobacter by gram staining and biochemical testing. They were further sub-cultured on selective media enriched with growth supplements for Listeria and Campylobacter. All six strains of Listeria and Campylobacter were tested against ten antibiotics. Campylobacter strains showed resistance against all the antibiotics, whereas Listeria was found to be resistant only to nalidixic acid and erythromycin. Further, the strains were tested against the two bacteriocins isolated from Enterococcus faecium. It was found that the bacteriocins showed better antimicrobial activity against the food pathogens and can be used as potential antimicrobials for food preservation. Thus, the study concluded that natural antimicrobials could be used as alternatives to synthetic antimicrobials to overcome the problem of food spoilage and severe food diseases.
Keywords: food pathogens, listeria, campylobacter, antibiotics, bacteriocins
Procedia PDF Downloads 72
295 Emissions and Total Cost of Ownership Assessment of Hybrid Propulsion Concepts for Bus Transport with Compressed Natural Gases or Diesel Engine
Authors: Volker Landersheim, Daria Manushyna, Thinh Pham, Dai-Duong Tran, Thomas Geury, Omar Hegazy, Steven Wilkins
Abstract:
Air pollution is one of the emerging problems in our society. Targets for the reduction of CO₂ emissions address low-carbon and resource-efficient transport. (Plug-in) hybrid electric propulsion concepts offer the possibility to reduce the total cost of ownership (TCO) and emissions of public transport vehicles (e.g., in bus applications). In this context, diesel engines are typically used to form the hybrid propulsion system of the vehicle. Although diesel engine technology has seen major advances, some challenges, such as the high amount of particle emissions, remain relevant. Gaseous fuels (i.e., compressed natural gas (CNG) or liquefied petroleum gas (LPG)) represent an attractive alternative to diesel because of their composition. In the framework of the EU-funded research project 'Optimised Real-world Cost-Competitive Modular Hybrid Architecture' (ORCA), two different hybrid-electric propulsion concepts have been investigated: one using a diesel engine as the internal combustion engine and one using CNG as fuel. The aim of the current study is to analyze the specific benefits of the aforementioned hybrid propulsion systems for predefined driving scenarios with regard to emissions and total cost of ownership in bus application. Engine models based on experimental data for diesel and CNG were developed. For the purpose of designing optimal energy management strategies for each propulsion system, map-driven or quasi-static models of the specific engine types are used in the simulation framework. An analogous modelling approach has been chosen to represent emissions. This paper compares the two concepts regarding their CO₂ and NOx emissions. This comparison is performed for relevant bus missions (urban, suburban, with and without zero-emission zone) and with different energy management strategies. In addition to the emissions, the downsizing potential of the combustion engine has also been analysed to minimize the powertrain TCO (pTCO) for plug-in hybrid electric buses. The results of the performed analyses show that the hybrid vehicle concept using the CNG engine has advantages with respect to both emissions and pTCO. The pTCO is 10% lower, CO₂ emissions are 13% lower, and NOx emissions are more than 50% lower than with the diesel combustion engine. These results are consistent across all usage profiles under investigation.
Keywords: bus transport, emissions, hybrid propulsion, pTCO, CNG
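As an illustration of the quasi-static (map-driven) engine modelling approach mentioned in this abstract, the following minimal sketch looks fuel consumption up from a speed-torque map at each operating point of a mission and converts it to CO₂ with a fuel-specific carbon factor. All map values, grid points, emission factors, and the toy mission are illustrative assumptions, not the engine data or energy management strategies of the ORCA project.

```python
# Minimal sketch of a quasi-static (map-based) engine model for comparing
# fuel use and CO2 of diesel vs. CNG over a mission profile. All numbers
# below are illustrative assumptions.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical engine operating grid: speed [rad/s] x torque [Nm]
speed = np.linspace(100, 400, 4)
torque = np.linspace(50, 450, 5)
# Hypothetical brake-specific fuel consumption maps [g/kWh]
bsfc_diesel = 200 + 0.1 * np.add.outer(400 - speed, 450 - torque)
bsfc_cng = 1.05 * bsfc_diesel            # assumed slightly higher BSFC for CNG

interp_diesel = RegularGridInterpolator((speed, torque), bsfc_diesel)
interp_cng = RegularGridInterpolator((speed, torque), bsfc_cng)

# Approximate tank-to-wheel CO2 emission factors [g CO2 per g fuel]
CO2_PER_G = {"diesel": 3.16, "cng": 2.75}

def cycle_co2(interp, fuel, op_points, dt=1.0):
    """Integrate fuel mass and CO2 over a cycle of (speed, torque) points."""
    pts = np.asarray(op_points, dtype=float)
    power_kw = pts[:, 0] * pts[:, 1] / 1000.0       # mechanical power [kW]
    bsfc = interp(pts)                              # [g/kWh] at each point
    fuel_g = np.sum(bsfc * power_kw * dt / 3600.0)  # fuel mass over cycle [g]
    return fuel_g, fuel_g * CO2_PER_G[fuel]

# A toy "urban mission": one operating point per second
cycle = [(250, 200)] * 600 + [(150, 120)] * 300
fuel_d, co2_d = cycle_co2(interp_diesel, "diesel", cycle)
fuel_c, co2_c = cycle_co2(interp_cng, "cng", cycle)
print(f"diesel: {fuel_d:.0f} g fuel, {co2_d / 1000:.1f} kg CO2")
print(f"CNG:    {fuel_c:.0f} g fuel, {co2_c / 1000:.1f} kg CO2")
```

Comparing the two fuels over the same mission in this way is the core of the emissions part of such a study; the TCO part would add component and energy costs on top.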
Procedia PDF Downloads 148
294 Influence of Atmospheric Circulation Patterns on Dust Pollution Transport during the Harmattan Period over West Africa
Authors: Ayodeji Oluleye
Abstract:
This study used Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI) data and a thirty-year reanalysis dataset (1983-2012) to investigate the influence of atmospheric circulation on dust transport during the Harmattan period over West Africa. The Harmattan dust mobilization and atmospheric circulation patterns were evaluated using a kernel density estimate, which shows the areas where most points are concentrated between the variables. The evolution of the Inter-Tropical Discontinuity (ITD), the sea surface temperature (SST) over the Gulf of Guinea, and the North Atlantic Oscillation (NAO) index during the Harmattan period (November-March) was also analyzed, and the average ITD positions, SST, and NAO were examined on a daily basis. Pearson product-moment correlation analysis was also employed to assess the effect of atmospheric circulation on Harmattan dust transport. The results show that the departure (increase) of TOMS AI values from the long-term mean (1.64) occurred from around the 21st of December, which marks the rich dust days of the winter period. Strong TOMS AI signals were observed from January to March, with the maximum occurring in the later months (February and March). The inter-annual variability of TOMS AI revealed that the rich dust years were 1984-1985, 1987-1988, 1997-1998, 1999-2000, and 2002-2004, while a notably poor dust year was found in 2005-2006 across all periods. The study found that strong north-easterly (NE) trade winds prevailed over most of the Sahelian region of West Africa during the winter months, with the maximum wind speed reaching 8.61 m/s in January. The strength of the NE winds determines the extent of dust transport to the coast of the Gulf of Guinea during winter. This study has confirmed that the presence of the Harmattan is strongly dependent on the SST over the Atlantic Ocean and the ITD position. The loci of the average SST and ITD positions over West Africa could be described by polynomial functions. The study concludes that the evolution of the near-surface wind field at 925 hPa and the variations of SST and ITD positions are the major large-scale atmospheric circulation systems driving the emission, distribution, and transport of Harmattan dust aerosols over West Africa. However, the influence of the NAO was shown to have a less significant effect on Harmattan dust transport over the region.
Keywords: atmospheric circulation, dust aerosols, Harmattan, West Africa
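The two statistical tools named above, the kernel density estimate and the Pearson product-moment correlation, can be sketched as follows. The arrays are synthetic placeholders standing in for the daily TOMS AI and reanalysis circulation series, which are not reproduced in this abstract.

```python
# Sketch: Pearson correlation and bivariate kernel density estimate between a
# dust-load proxy (TOMS AI) and a circulation variable (NE trade wind speed).
import numpy as np
from scipy.stats import gaussian_kde, pearsonr

rng = np.random.default_rng(0)
n_days = 4500                                   # ~30 Harmattan seasons of daily data
wind_ne = rng.normal(6.0, 1.5, n_days)          # NE trade wind speed [m/s] (synthetic)
toms_ai = 0.4 * wind_ne + rng.normal(0, 0.8, n_days)  # aerosol index (synthetic)

# Pearson product-moment correlation between dust proxy and circulation variable
r, p = pearsonr(wind_ne, toms_ai)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Bivariate KDE: where are most (wind, AI) pairs concentrated?
kde = gaussian_kde(np.vstack([wind_ne, toms_ai]))
grid_w, grid_a = np.meshgrid(np.linspace(2, 10, 50), np.linspace(-1, 5, 50))
density = kde(np.vstack([grid_w.ravel(), grid_a.ravel()])).reshape(grid_w.shape)
print("density peak at wind ~", grid_w.ravel()[density.argmax()], "m/s")
```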
Procedia PDF Downloads 313
293 Changing the Landscape of Fungal Genomics: New Trends
Authors: Igor V. Grigoriev
Abstract:
Understanding of the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool to understand these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes’, aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from fruiting bodies of ‘macro’ fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. The genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics. In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool to understand gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota have been reported. Finally, data production at such a scale requires data integration to enable efficient data analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.
Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics
Procedia PDF Downloads 209
292 New Suspension Mechanism for a Formula Car using Camber Thrust
Authors: Shinji Kajiwara
Abstract:
The basic abilities of a vehicle are to “run”, “turn” and “stop”. The safety and comfort of a drive on various road surfaces and at various speeds depend on the performance of these basic abilities. Stability and maneuverability of a vehicle are vital in automotive engineering. The stability of a vehicle is its ability to revert to a stable state during a drive when faced with crosswinds and irregular road conditions. The maneuverability of a vehicle is its ability to change direction swiftly during a drive based on the steering of the driver. The stability and maneuverability of a vehicle can together be described as its driving stability. Since fossil-fuelled vehicles are the main type of transportation today, the environmental factor in automotive engineering is also vital. By improving the fuel efficiency of the vehicle, overall carbon emissions will be reduced, thus reducing the effect of global warming and greenhouse gases on the Earth. Another main focus of automotive engineering is the safety performance of the vehicle, especially with the worrying increase in vehicle collisions every day. With better safety performance, every driver will be more confident driving every day. Next, let us focus on the “turn” ability of a vehicle. By improving this particular ability, the cornering limit of the vehicle can be improved, thus increasing stability and maneuverability. In order to improve the cornering limit of the vehicle, a study to find the balance between the steering system, the stability of the vehicle, higher lateral acceleration, and cornering limit detection must be conducted. The aim of this research is to study and develop a new suspension system that will boost the lateral acceleration of the vehicle and ultimately improve its cornering limit. This research will also study the environmental and stability factors of the new suspension system. The double wishbone suspension system is widely used in four-wheel vehicles, especially in high-cornering-performance sports cars and racing cars. The double wishbone design allows the engineer to carefully control the motion of the wheel by controlling such parameters as camber angle, caster angle, toe pattern, roll center height, scrub radius, scuff, and more. The development of the new suspension system will focus on its ability to optimize camber control and to improve the camber limit during a cornering motion. The research will be carried out using CAE analysis tools. Using these tools, we will develop a JSAE Formula machine equipped with the double wishbone system as well as the new suspension system, and conduct simulations and studies on the performance of both suspension systems.
Keywords: automobile, camber thrust, cornering force, suspension
Procedia PDF Downloads 323
291 Morphology, Qualitative, and Quantitative Elemental Analysis of Pheasant Eggshells in Thailand
Authors: Kalaya Sribuddhachart, Mayuree Pumipaiboon, Mayuva Youngsabanant-Areekijseree
Abstract:
The ultrastructure of the eggshells of 20 species of pheasants in Thailand, (Siamese Fireback, Lophura diardi), (Silver Pheasant, Lophura nycthemera), (Kalij Pheasant, Lophura leucomelanos crawfurdii), (Kalij Pheasant, Lophura leucomelanos lineata), (Red Junglefowl, Gallus gallus spadiceus), (Crested Fireback, Lophura ignita rufa), (Green Peafowl, Pavo muticus), (Indian Peafowl, Pavo cristatus), (Grey Peacock Pheasant, Polyplectron bicalcaratum bicalcaratum), (Lesser Bornean Fireback, Lophura ignita ignita), (Green Junglefowl, Gallus varius), (Hume's Pheasant, Syrmaticus humiae humiae), (Himalayan Monal, Lophophorus impejanus), (Golden Pheasant, Chrysolophus pictus), (Ring-Neck Pheasant, Phasianus sp.), (Reeves’s Pheasant, Syrmaticus reevesi), (Polish Chicken, Gallus sp.), (Brahma Chicken, Gallus sp.), (Yellow Golden Pheasant, Chrysolophus pictus luteus), and (Lady Amherst's Pheasant, Chrysolophus amherstiae), was studied using the secondary electron imaging (SEI) and energy-dispersive X-ray analysis (EDX) detectors of a scanning electron microscope. In general, all pheasant eggshells showed three layers: cuticle, palisade, and mammillary. The total thickness ranged from 190.28±5.94 to 838.96±16.31 µm. The palisade layer is the thickest, followed by the mammillary and cuticle layers. The palisade layer in all pheasant eggshells consisted of numerous vesicle holes firmly forming a network throughout the layer. The vesicle holes in the different pheasant eggshells differed in porosity, ranging from 0.23±0.05 to 0.44±0.11 µm. The mammillary layer was the most compact layer, with variable shapes (broad-based V- and U-shapes) connecting to the shell membrane. Elemental analysis of the eggshells of the 20 species showed 9 apparent elements, including carbon (C), oxygen (O), calcium (Ca), phosphorus (P), sulfur (S), magnesium (Mg), silicon (Si), aluminum (Al), and copper (Cu), at percentages of 28.90-8.33%, 60.64-27.61%, 55.30-14.49%, 1.97-0.03%, 0.08-0.03%, 0.50-0.16%, 0.30-0.04%, 0.06-0.02%, and 2.67-1.73%, respectively. It was found that Ca, C, and O showed the highest elemental compositions, which are essential for pheasant embryonic development, mainly present as calcium carbonate (CaCO3), constituting more than 97%. Meanwhile, Mg, S, Si, Al, and P were major inorganic constituents of the eggshells, which are directly related to an increase in shell hardness. Finally, the heavy metal copper (Cu) was observed in 4 eggshell species: Golden Pheasant (2.67±0.16%), Indian Peafowl (2.61±0.13%), Green Peafowl (1.97±0.74%), and Silver Pheasant (1.73±0.11%). A non-significant difference was found in the percentages of the 9 elements among all pheasant eggshells. This study provides biological and taxonomic information on pheasants in Thailand for conservation.
Keywords: pheasant eggshells, secondary electron imaging (SEI) and energy dispersive X-ray analysis (EDX), morphology, Thailand
Procedia PDF Downloads 235
290 The Lonely Entrepreneur: Antecedents and Effects of Social Isolation on Entrepreneurial Intention and Output
Authors: Susie Pryor, Palak Sadhwani
Abstract:
The purpose of this research is to provide the foundations for a broad research agenda examining the role loneliness plays in entrepreneurship. While qualitative research in entrepreneurship incidentally captures the existence of loneliness as a part of the lived reality of entrepreneurs, to the authors' knowledge, no academic work has to date explored this construct in this context. Moreover, many individuals reporting high levels of loneliness (women, ethnic minorities, immigrants, low income, low education) reflect those who are currently driving small business growth in the United States. Loneliness is a persistent state of emotional distress which results from feelings of estrangement and rejection or develops in the absence of social relationships and interactions. Empirical work finds links between loneliness and depression, suicide and suicide ideation, anxiety, hostility and passiveness, lack of communication and adaptability, shyness, poor social skills and unrealistic social perceptions, self-doubt, fear of rejection, and negative self-evaluation. Lonely individuals have been found to exhibit lower levels of self-esteem, higher levels of introversion, lower affiliative tendencies, less assertiveness, higher sensitivity to rejection, a heightened external locus of control, intensified feelings of regret and guilt over past events, and rigid and overly idealistic goals concerning the future. These characteristics are likely to impact entrepreneurs and their work. Research identifies some key dangers of loneliness. Loneliness damages human love and intimacy, can disturb and distract individuals from channeling creative and effective energies in a meaningful way, may result in premature, poorly thought out, and at times even irresponsible decisions, and can produce hard and desensitized individuals, with compromised health and quality of life. The current study utilizes meta-analysis and text analytics to distinguish loneliness from other related constructs (e.g., social isolation) and to categorize antecedents and effects of loneliness across subpopulations. This work has the potential to materially contribute to the field of entrepreneurship by cleanly defining constructs and providing foundational background for future research. It offers a richer understanding of the evolution of loneliness and related constructs over the life cycle of entrepreneurial start-up and development. Further, it suggests preliminary avenues for exploration and methods of discovery that will result in knowledge useful to the field of entrepreneurship. It is useful both to entrepreneurs and those who work with them and to academics interested in the topics of loneliness and entrepreneurship. It adopts a grounded theory approach.
Keywords: entrepreneurship, grounded theory, loneliness, meta-analysis
Procedia PDF Downloads 112
289 Chemistry and Biological Activity of Feed Additive for Poultry Farming
Authors: Malkhaz Jokhadze, Vakhtang Mshvildadze, Levan Makaradze, Ekaterine Mosidze, Salome Barbaqadze, Mariam Murtazashvili, Dali Berashvili, Koba Sivsivadze, Lasha Bakuridze, Aliosha Bakuridze
Abstract:
Essential oils are one of the most important groups of biologically active substances present in plants. Due to the chemical diversity of their components, essential oils and their preparations have a wide spectrum of pharmacological action. They have bactericidal, antiviral, fungicidal, antiprotozoal, anti-inflammatory, spasmolytic, sedative, and other activities. They are expectorant, spasmolytic, sedative, hypotensive, secretion-enhancing, and antioxidant remedies. Based on preliminary pharmacological studies, we have developed an essential-oil-based formulation called “Phytobiotic”, a feed additive for poultry intended as an alternative to antibiotics. Phytobiotic is a water-soluble powder containing a composition of essential oils of thyme, clary, and monarda, and auxiliary substances: dry extract of liquorice and inhalation lactose. At this stage of the research, the goal was to study the chemical composition of the phytobiotic, identify the main substances and determine their quantity, and investigate the biological activity of the phytobiotic through in vitro and in vivo studies. Using gas chromatography-mass spectrometry, 38 components were identified in the phytobiotic, representing acyclic, monocyclic, and bicyclic terpenes, and sesquiterpenes. Together with the identification of the main active substances, their quantitative content was determined, including the acyclic terpene alcohol β-linalool, the acyclic terpene ester linalyl acetate, the monocyclic terpenes D-limonene and γ-terpinene, and the monocyclic aromatic terpene thymol. The phytobiotic has a pronounced and, at the same time, broad spectrum of antibacterial activity. In a cell model, the phytobiotic showed weak antioxidant activity, which was stronger in the ORAC (chemical model) tests. Anti-inflammatory activity was also observed. When fowl were supplied feed enriched with the phytobiotic, the weight gain of chickens in the experimental group exceeded that of the control group during the entire period of the experiment. The survival rate of broilers in the experimental group during the growth period was 98%, compared to 94% in the control group. As a result of the research conducted, four probable mechanisms important for the action of phytobiotics were identified: sensory, metabolic, antioxidant, and antibacterial action. General toxic, possible local irritant, and allergenic effects of the phytobiotic were also investigated. The assays performed showed the formulation to be safe.
Keywords: clary, essential oils, monarda, poultry, phytobiotics, thyme
Procedia PDF Downloads 173
288 Identifying the Effects of the Rural Demographic Changes in the Northern Netherlands: A Holistic Approach to Create Healthier Environment
Authors: A. R. Shokoohi, E. A. M. Bulder, C. Th. van Alphen, D. F. den Hertog, E. J. Hin
Abstract:
The Northern region of the Netherlands has beautiful landscapes, a nice diversity of green and blue areas, and dispersed settlements. However, some recent population changes can become threats to health and wellbeing in these areas. The rural areas in the three northern provinces (Groningen, Friesland, and Drenthe) see youngsters leave the region, and they are therefore aging faster than other regions in the Netherlands. As a result, some villages have faced a major population decline that is leading to a loss of facilities and amenities and a decrease in accessibility and social cohesion. Those who still live in these villages are relatively old, low educated, and on low incomes. To develop a deeper understanding of the health status of the people living in these areas, and to help them improve their living environment, the GO!-method is being applied in this study. This method has been developed by the National Institute for Public Health and the Environment (RIVM) of the Netherlands and is inspired by Machteld Huber's broad definition of health: the ability to adapt and to self-manage in the face of the physical, emotional, and social challenges of life, while paying extra attention to vulnerable groups. A healthy living environment is defined as an environment that residents find pleasant and that encourages and supports healthy behavior. The GO!-method integrates six domains that constitute a healthy living environment: health and lifestyle, facilities and development, safety and hygiene, social cohesion and active citizens, green areas, and air and noise pollution. First of all, this method will identify opportunities for a healthier living environment using existing information and the perceptions of residents and other local stakeholders, in order to strengthen social participation and quality of life in these rural areas. Second, this approach will connect the identified opportunities with available and effective evidence-based interventions in order to develop an action plan from the residents' and local authorities' perspective, which will help them make their municipalities healthier and more resilient. To the best of our knowledge, this method is being used for the first time in rural areas, in close collaboration with the residents and local authorities of the three provinces, to create a sustainable process and stimulate social participation. Our paper will present the outcomes of the first phase of this project, carried out in collaboration with the municipality of Westerkwartier, located in the northwest of the province of Groningen. It will describe the current situation and identify local assets, opportunities, and policies relating to a healthier environment, as well as needs and challenges in achieving the goals. The preliminary results show that rural demographic changes in the northern Netherlands have negative impacts on service provision and social cohesion, and there is a need to understand this complicated situation and improve the quality of life in those areas.
Keywords: population decline, rural areas, healthy environment, Netherlands
Procedia PDF Downloads 98
287 Feasibility of Small Autonomous Solar-Powered Water Desalination Units for Arid Regions
Authors: Mohamed Ahmed M. Azab
Abstract:
The shortage of fresh water is a major problem in several areas of the world, such as arid regions and the coastal zones of several countries on the Arabian Gulf. Fortunately, arid regions are exposed to high levels of solar irradiation for most of the year, which makes the utilization of solar energy a promising solution to this problem with zero harmful emissions (a green system). The main objective of this work is to conduct a feasibility study of utilizing small autonomous water desalination units powered by photovoltaic modules as a green renewable energy resource, to be employed in isolated zones as a source of drinking water for scattered communities where the installation of large desalination stations is ruled out owing to the unavailability of an electric grid. Yanbu City is chosen as a case study, where the Renewable Energy Center exists and is equipped with all the sensors needed to assess the availability of solar energy throughout the year. The study included two types of available water: the first is brackish well water and the second is seawater from coastal regions. In the case of well water, two versions of desalination units are involved in the study: the first version is based on daytime operation only, while the second also takes night operation into consideration, which requires an energy storage system (batteries) to provide the necessary electric power at night. According to the feasibility study results, the utilization of small autonomous desalination units is applicable and economically acceptable in the case of brackish well water. In the case of seawater, the capital costs are extremely high, and the cost of desalinated water will not be economically feasible unless governmental subsidies are provided. In addition, the study indicated that, for the same water production, the energy storage (day-night) version adds capital cost for the batteries and extra running cost for their replacement, which makes the unit water price uncompetitive not only with the day-only unit but also with conventional units powered by a diesel generator (fossil fuel), owing to the low fuel prices in the Kingdom. However, the cost analysis shows that the price of the produced water per cubic meter from the day-night unit is similar to that from the day-only unit, provided that the day-night unit operates for a period that is theoretically 50% longer.
Keywords: solar energy, water desalination, reverse osmosis, arid regions
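A minimal sketch of the kind of unit-water-cost comparison described here: a simple levelized cost of water (annualized capital plus running costs, divided by annual production) for a day-only unit versus a day-night unit with battery storage. Every figure below is an illustrative assumption, not a result from the study.

```python
# Sketch: levelized unit water cost for a day-only vs. a day-night PV-RO unit.
def capital_recovery_factor(rate: float, years: int) -> float:
    """Annualize a capital cost over `years` at discount rate `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def unit_water_cost(capex, annual_opex, m3_per_hour, hours_per_day,
                    rate=0.05, lifetime=20):
    """Levelized cost per m3 of product water."""
    annual_m3 = m3_per_hour * hours_per_day * 365
    return (capex * capital_recovery_factor(rate, lifetime) + annual_opex) / annual_m3

# Hypothetical day-only unit: 1 m3/h for 8 sun hours per day
cost_day = unit_water_cost(capex=25_000, annual_opex=1_200,
                           m3_per_hour=1.0, hours_per_day=8)

# Hypothetical day-night unit: battery capex plus replacement cost folded into
# opex, in exchange for ~50% longer daily operation (12 h/day)
cost_day_night = unit_water_cost(capex=25_000 + 9_000, annual_opex=1_200 + 900,
                                 m3_per_hour=1.0, hours_per_day=12)

print(f"day-only unit:  {cost_day:.2f} per m3")
print(f"day-night unit: {cost_day_night:.2f} per m3")
```

With roughly 50% longer daily operation, the extra battery costs of the day-night unit are spread over more cubic meters, which is why its unit price can end up close to that of the day-only unit.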
Procedia PDF Downloads 456
286 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called Hub-and-Spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined using the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, concrete verbs, and abstract verbs. As the first step in our data analysis, we tested our results for normality, and since they were not normally distributed (p < 0.05), we used nonparametric statistics for the remainder of the analysis. Using descriptive statistics, we determined the frequency of correct and incorrect responses, and with this knowledge, we could later adjust and remove items of questionable reliability. Reliability was tested using Cronbach’s α, and all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two genders (p > 0.05). Likewise, age had no effect on the results in a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined using a nonparametric Spearman’s rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a monotonic relationship between the measured semantic functions. A margin of error of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised therapeutic design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
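A minimal sketch of the analysis pipeline described above (reliability check with Cronbach's α, a normality test followed by a nonparametric group comparison, and a Spearman correlation matrix between subtests), run on synthetic item-level data; the real test items and scores are not reproduced here.

```python
# Sketch: reliability, nonparametric group comparison, and subtest correlations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_items = 480, 30
items = rng.integers(0, 2, size=(n_subjects, n_items))   # 0/1 item scores (synthetic)
gender = rng.integers(0, 2, size=n_subjects)              # two groups (synthetic)
subtest_scores = rng.normal(size=(n_subjects, 5))         # five subtest totals (synthetic)

def cronbach_alpha(x: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("alpha =", round(cronbach_alpha(items), 3))

# Normality check on total scores, then a Mann-Whitney U group comparison
total = items.sum(axis=1)
print("Shapiro-Wilk p =", stats.shapiro(total).pvalue)
u, p = stats.mannwhitneyu(total[gender == 0], total[gender == 1])
print("Mann-Whitney U p =", p)

# Spearman rho correlation matrix between the five subtests
rho, p_mat = stats.spearmanr(subtest_scores)
print("subtest correlation matrix:\n", np.round(rho, 2))
```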
Procedia PDF Downloads 104
285 Impact of Chess Intervention on Cognitive Functioning of Children
Authors: Ebenezer Joseph
Abstract:
Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors, such as the age and gender of the child, on the effectiveness of the chess intervention. This research used an experimental design to study the impact of training in chess on the intelligence of children. A pre-test/post-test control group design was utilized. The research design involved two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extra-curricular activities in school. The main independent variable was training in chess. Other independent variables were the gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual motor skills, and creativity). The sample consisted of 200 children studying in government and private schools, selected by random sampling. The sample included both boys and girls in the age range 6 to 16 years. The experimental group consisted of 100 children (50 from government schools and 50 from private schools) with an equal representation of boys and girls. The control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India), and the Wallach-Kogan Creativity Test. The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1–22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics), and working with chess software. Further, students' games were mapped using chess software, and the children's patterns of play were studied. They were taught the ideas behind chess openings and were also given exposure to classical games. The children participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group showed significant increases in composite scores such as working memory and perceptual reasoning. Chess training significantly enhanced the total creativity scores, line drawing, and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
Keywords: chess, intelligence, creativity, children
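For illustration, a minimal sketch of the preliminary analysis mentioned above: an independent t-test comparing pre-to-post gains between the chess and control groups. The score arrays are synthetic placeholders, not the study's data, and the assumed gains are arbitrary.

```python
# Sketch: independent t-test on gain scores in a pre-test/post-test design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 50                                            # children per group (preliminary analysis)
pre_chess = rng.normal(100, 15, n)
post_chess = pre_chess + rng.normal(6, 5, n)      # assumed average gain for chess group
pre_control = rng.normal(100, 15, n)
post_control = pre_control + rng.normal(1, 5, n)  # assumed smaller gain for controls

gain_chess = post_chess - pre_chess
gain_control = post_control - pre_control
t, p = stats.ttest_ind(gain_chess, gain_control)
print(f"mean gain (chess) = {gain_chess.mean():.1f}, "
      f"mean gain (control) = {gain_control.mean():.1f}, t = {t:.2f}, p = {p:.4f}")
```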
Procedia PDF Downloads 258
284 Remote Sensing-Based Prediction of Asymptomatic Rice Blast Disease Using Hyperspectral Spectroradiometry and Spectral Sensitivity Analysis
Authors: Selvaprakash Ramalingam, Rabi N. Sahoo, Dharmendra Saraswat, A. Kumar, Rajeev Ranjan, Joydeep Mukerjee, Viswanathan Chinnasamy, K. K. Chaturvedi, Sanjeev Kumar
Abstract:
Rice is one of the most important staple food crops in the world. Among the various diseases that affect rice crops, rice blast is particularly significant, causing yield and economic losses. While the plant has defense mechanisms in place, such as chemical indicators (proteins, salicylic acid, jasmonic acid, ethylene, and azelaic acid) and resistance genes in certain varieties that can protect against diseases, susceptible varieties remain vulnerable to these fungal diseases. Early prediction of rice blast (RB) disease is crucial, but conventional techniques for early prediction are time-consuming and labor-intensive. Hyperspectral remote sensing techniques hold the potential to predict RB disease at its asymptomatic stage. In this study, we aimed to demonstrate the prediction of RB disease at the asymptomatic stage using a non-imaging hyperspectral ASD spectroradiometer under controlled laboratory conditions. We applied statistical spectral discrimination theory to identify unknown spectra of M. oryzae, the fungus responsible for rice blast disease. The infrared (IR) region was found to be significantly affected by RB disease; the disease may alter the absorption, reflection, or emission of infrared radiation by the affected plant tissues. Our research revealed that the protein spectrum in the IR region is impacted by RB disease. We identified strong correlations in the amide I region, around X = 1064 nm and Y = 1300 nm, using lambda-by-lambda derived spectra methods for protein detection. During the stages when the disease is developing, typically from day 3 to day 5, the plant's defense mechanisms are not as effective. This is especially true for the PB-1 variety of rice, which is highly susceptible to rice blast disease. Consequently, the proteins in the plant are adversely affected during this critical time. The spectral contour plot reveals the highly correlated spectral regions at X = 1064 nm and Y = 1300 nm associated with RB infection. Based on these spectral sensitivities, we developed new spectral disease indices for predicting different stages of disease emergence. The goal of this research is to lay the foundation for future UAV and satellite-based studies aimed at long-term monitoring of RB disease.
Keywords: rice blast, asymptomatic stage, spectral sensitivity, IR
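One common way to build the kind of lambda-by-lambda correlation contour and normalized-difference spectral disease index described above is sketched below on synthetic reflectance spectra; the band grid, the severity proxy (days after inoculation), and the reflectance values are all assumptions for illustration.

```python
# Sketch: lambda-by-lambda correlation of a normalized-difference index with
# disease stage, to locate the most disease-sensitive band pair.
import numpy as np

rng = np.random.default_rng(1)
wavelengths = np.arange(1000, 1351, 50)          # coarse band grid [nm], illustrative
n_samples = 40
days = np.repeat(np.arange(0, 8, 2), 10)         # days after inoculation (severity proxy)

# Synthetic reflectance spectra with a weak disease-related signal at two bands
refl = 0.30 + 0.01 * rng.normal(size=(n_samples, wavelengths.size))
refl[:, wavelengths == 1050] += 0.005 * days[:, None]
refl[:, wavelengths == 1300] -= 0.005 * days[:, None]

def nd_index(r, i, j):
    """Normalized-difference index of two bands."""
    return (r[:, i] - r[:, j]) / (r[:, i] + r[:, j])

# Lambda-by-lambda contour: correlation of ND(l1, l2) with disease stage
n_bands = wavelengths.size
corr = np.zeros((n_bands, n_bands))
for i in range(n_bands):
    for j in range(n_bands):
        if i != j:
            corr[i, j] = np.corrcoef(nd_index(refl, i, j), days)[0, 1]

best = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print("most disease-sensitive band pair:",
      wavelengths[best[0]], "nm and", wavelengths[best[1]], "nm")
```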
Procedia PDF Downloads 87
283 Mapping of Renovation Potential in Rudersdal Municipality Based on a Sustainability Indicator Framework
Authors: Barbara Eschen Danielsen, Morten Niels Baxter, Per Sieverts Nielsen
Abstract:
Europe is currently in an energy and climate crisis, which requires more sustainable solutions than those used before. Europe uses 40% of its energy in buildings, so there is now a significant focus on finding and committing to new initiatives to reduce energy consumption in buildings. The European Union introduced a building standard in 2021 to be upheld by 2030. This new building standard requires a significant reduction of CO2 emissions from both privately and publicly owned buildings. The overall aim is to achieve a zero-emission building stock by 2050. The EU is revising the Energy Performance of Buildings Directive (EPBD) as part of the “Fit for 55” package; the revision was adopted on March 14, 2023. The new directive's main goal is to renovate the least energy-efficient homes in Europe. A renovation project will come at a cost to the homeowner, but it will also bring an improvement in energy efficiency and, therefore, a cost reduction. After the implementation of the EU directive, many homeowners will have to focus their attention on how to make the most effective energy renovations of their homes. The new EU directive will affect almost one million Danish homes (30%), as they do not meet the newly implemented requirements for energy efficiency. The problem for these one million homeowners is that it is not easy to decide which renovation project to consider. Houses are built differently, and there are many possible solutions. The main focus of this paper is to identify the most impactful solutions and evaluate them with a criteria-based sustainability indicator framework. The result of the analysis gives each homeowner insight into the various renovation options, including both advantages and disadvantages, with the aim of avoiding unnecessary costs and errors while minimizing their CO2 footprint. Given that the new EU directive impacts a significant number of homeowners and their homes, both in Denmark and in the rest of the European Union, it is crucial to clarify which renovations have the most environmental impact and which are the most cost-effective. We have evaluated the 10 most impactful solutions and assessed their impact in an indicator framework which includes 9 indicators and covers economic, environmental, and social factors. We have packaged the results of the analysis into three packages: the most cost-effective (short-term), the most cost-effective (long-term), and the most sustainable. The results of the study ensure transparency and thereby provide homeowners with a tool to help their decision-making. The analysis is based mostly on qualitative indicators, but it will be possible to evaluate most of the indicators quantitatively in a future study.
Keywords: energy efficiency, building renovation, renovation solutions, building energy performance criteria
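A criteria-based indicator framework of the kind described here can be sketched as a simple weighted multi-criteria scoring of candidate renovation measures. The nine indicator names, the 1-5 scores, and the equal weights below are illustrative assumptions, not the study's actual indicators or data.

```python
# Sketch: weighted multi-criteria scoring and ranking of renovation measures.
from dataclasses import dataclass

INDICATORS = ["capex", "payback", "energy_saving", "co2_reduction", "comfort",
              "disruption", "lifetime", "maintenance", "indoor_climate"]

# Equal weights here; a real framework could weight economic, environmental,
# and social indicators differently for each of the three packages.
WEIGHTS = {name: 1.0 / len(INDICATORS) for name in INDICATORS}

@dataclass
class Measure:
    name: str
    scores: dict  # indicator name -> score on a 1-5 scale

    def weighted_score(self) -> float:
        return sum(WEIGHTS[k] * v for k, v in self.scores.items())

measures = [
    Measure("roof insulation", dict(zip(INDICATORS, [4, 4, 4, 4, 3, 4, 5, 5, 4]))),
    Measure("heat pump",       dict(zip(INDICATORS, [2, 3, 5, 5, 4, 2, 4, 3, 4]))),
    Measure("triple glazing",  dict(zip(INDICATORS, [3, 2, 3, 3, 5, 3, 5, 5, 5]))),
]

# Rank candidate measures by weighted score (higher is better)
for m in sorted(measures, key=Measure.weighted_score, reverse=True):
    print(f"{m.name:<18} {m.weighted_score():.2f}")
```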
Procedia PDF Downloads 90
282 A Dynamic Cardiac Single Photon Emission Computer Tomography Using Conventional Gamma Camera to Estimate Coronary Flow Reserve
Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick
Abstract:
Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report the progress and current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for the estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) were administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. The acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve (CFR) was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated against dynamic PET. Results: The range of territorial MBF in the LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF estimates from PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal than for normal territories (normal mean MBF = 2.49 ± 0.61, abnormal mean MBF = 1.43 ± 0.62, p < 0.001). Conclusions: The visually assessed image findings in clinical SPECT are subjective and may not reflect direct physiologic measures of coronary lesions. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated by the dynamic SPECT method. A quantitative approach, such as measuring CFR using dynamic SPECT imaging, is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-Tetrofosmin, coronary flow reserve
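The flow-quantification step described here reduces to a simple ratio: coronary flow reserve (CFR) is stress MBF divided by rest MBF, computed per territory, with SPECT-derived values then correlated against PET. The sketch below uses made-up MBF values within the reported range (0.44-3.81 ml/min/g), not patient data.

```python
# Sketch: per-territory CFR from rest/stress MBF, plus SPECT-vs-PET correlation.
import numpy as np
from scipy.stats import pearsonr

territories = ["LAD", "RCA", "LCX"]
rest_mbf_spect = np.array([0.9, 0.8, 1.0])     # ml/min/g (illustrative)
stress_mbf_spect = np.array([2.4, 1.3, 2.6])   # ml/min/g (illustrative)

cfr_spect = stress_mbf_spect / rest_mbf_spect  # CFR = stress MBF / rest MBF
for terr, cfr in zip(territories, cfr_spect):
    print(f"{terr}: CFR = {cfr:.2f}")

# Validation against PET in the same territories (again illustrative values)
stress_mbf_pet = np.array([2.6, 1.5, 2.5])
r, p = pearsonr(stress_mbf_spect, stress_mbf_pet)
print(f"SPECT vs PET stress MBF: r = {r:.2f} (p = {p:.2g})")
```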
Procedia PDF Downloads 152
281 Vapour Liquid Equilibrium Measurement of CO₂ Absorption in Aqueous 2-Aminoethylpiperazine (AEP)
Authors: Anirban Dey, Sukanta Kumar Dash, Bishnupada Mandal
Abstract:
Carbon dioxide (CO2) is a major greenhouse gas responsible for global warming, and fossil fuel power plants are the main emitting sources. Therefore, the capture of CO2 is essential to keep emission levels within the standards. Carbon capture and storage (CCS) is considered an important option for the stabilization of atmospheric greenhouse gases and the minimization of global warming effects. There are three approaches to CCS: pre-combustion capture, where carbon is removed from the fuel prior to combustion; oxy-fuel combustion, where coal is combusted with oxygen instead of air; and post-combustion capture, where the fossil fuel is combusted to produce energy and CO2 is removed from the flue gases left after the combustion process. Post-combustion technology offers some advantages, as existing combustion technologies can still be used without major changes. A number of separation processes could be utilized as part of post-combustion capture technology. These include (a) physical absorption, (b) chemical absorption, (c) membrane separation, and (d) adsorption. Chemical absorption is one of the most extensively used technologies for large-scale CO2 capture systems. The industrially important solvents used are primary amines like monoethanolamine (MEA) and diglycolamine (DGA), secondary amines like diethanolamine (DEA) and diisopropanolamine (DIPA), and tertiary amines like methyldiethanolamine (MDEA) and triethanolamine (TEA). Primary and secondary amines react fast and directly with CO2 to form stable carbamates, while tertiary amines do not react directly with CO2; in aqueous solution, they catalyze the hydrolysis of CO2 to form a bicarbonate ion and a protonated amine. Concentrated piperazine (PZ) has been proposed as a better solvent, as well as an activator, for CO2 capture from flue gas, with a 10% energy benefit compared to conventional amines such as MEA. However, the application of concentrated PZ is limited due to its low solubility in water at low temperature and lean CO2 loading. Following the performance of PZ, its derivative 2-aminoethylpiperazine (AEP), a cyclic amine, can be explored as an activator for the absorption of CO2. Vapour-liquid equilibrium (VLE) in CO2 capture systems is an important factor for the design of separation equipment and gas treating processes. For proper thermodynamic modeling, accurate equilibrium data for the solvent system over a wide range of temperatures, pressures, and compositions are essential. The present work focuses on the determination of VLE data for the (AEP + H2O) system at 40 °C over a range of compositions.
Keywords: absorption, aminoethyl piperazine, carbon dioxide, vapour liquid equilibrium
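As an illustration of how a single equilibrium loading point might be reduced from raw measurements in a static equilibrium cell, the sketch below obtains the moles of CO2 absorbed from the pressure drop in a known gas volume (ideal-gas assumption) and divides by the moles of amine charged. The apparatus, volumes, and pressures are assumptions for illustration; the abstract does not specify the experimental details.

```python
# Sketch: reducing one equilibrium measurement to a CO2 loading value.
R = 8.314          # gas constant [J/(mol K)]
T = 313.15         # 40 degC in K

def loading_point(p_initial_kpa, p_equilibrium_kpa, gas_volume_l,
                  amine_mass_g, amine_molar_mass=129.2):
    """CO2 loading [mol CO2 / mol AEP] from a single equilibrium measurement."""
    # moles of CO2 absorbed = pressure drop * gas volume / (R * T), ideal gas
    dn_co2 = (p_initial_kpa - p_equilibrium_kpa) * 1e3 * gas_volume_l * 1e-3 / (R * T)
    n_amine = amine_mass_g / amine_molar_mass
    return dn_co2 / n_amine

# One hypothetical data point: 200 kPa of CO2 charged, 155 kPa at equilibrium
alpha = loading_point(p_initial_kpa=200, p_equilibrium_kpa=155,
                      gas_volume_l=1.0, amine_mass_g=12.9)
print(f"CO2 loading = {alpha:.3f} mol CO2 / mol AEP")
```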
Procedia PDF Downloads 269
280 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control
Authors: Mohamed Elmeshad
Abstract:
During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector, and a contraction of government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism had led to increased pressure to open the mediascape and remove it from the clutches of the government, which had monopolized print and broadcast mass media for over four decades by that point. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from a financial and security perspective, as well as operational capacities of distribution and access to the means of production. The power dynamics between mass media establishments and the state were moulded during this period in a novel way, as were the power dynamics within media establishments. The changes in the country's political economy itself somehow mirrored these developments. This paper will examine these dynamics and shed light on the political economy of Egypt's newly privatized mass media, especially in the early 2000s. Methodology: This study will rely on semi-structured interviews with individuals involved with these changes from the perspective of the media organizations. It will also map out the process of media privatization by looking at the administrative, operative, and legislative institutions and contexts in order to attempt to draw conclusions on methods of control and the role of the state during the process of privatization. Finally, a brief discourse analysis will be necessary in order to aptly convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egyptian private, “independent” media mirrored the trajectory of transitions in the country's political economy. Liberalization of the economy meant that a growing class of business owners would explore opportunities that such new markets would offer. However, the regime's attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation, much of the regime's manoeuvring with regard to the privatization of media was haphazardly used to indirectly expand the regime and its ruling party's ability to retain influence, while creating a believable façade of openness. In this paper, we will attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.
Keywords: business, mass media, political economy, power, privatisation
Procedia PDF Downloads 228
279 An Audit of Climate Change and Sustainability Teaching in Medical School
Authors: Karolina Wieczorek, Zofia Przypaśniak
Abstract:
Climate change is a rapidly growing threat to global health, and part of the responsibility to combat it lies within the healthcare sector itself, including adequate education of future medical professionals. To mitigate the consequences, the General Medical Council (GMC) has equipped medical schools with a list of outcomes regarding sustainability teaching. Students are expected to be able to analyze the impact of the healthcare sector's emissions on climate change. The delivery of the related teaching content is, however, often inadequate, and insufficient time is devoted to exploration of the topics; teaching curricula lack in-depth exploration of the learning objectives. This study aims to assess the extent and characteristics of climate change and sustainability teaching in the curriculum of a chosen UK medical school (Barts and The London School of Medicine and Dentistry). It compares the data to the national average scores from the Climate Change and Sustainability Teaching (C.A.S.T.) in Medical Education Audit to draw conclusions about teaching on a regional level. This is a single-center audit of the timetabled teaching sessions in the medical course. The study looked at the academic year 2020/2021 and included a review of all non-elective, core curriculum teaching materials, including tutorials, lectures, written resources, and assignments in all five years of the undergraduate and graduate degrees, focusing only on mandatory teaching attended by all students (excluding elective modules). The topics covered were cross-checked against the GMC outcomes for graduates, “Educating for Sustainable Healthcare – Priority Learning Outcomes”, as the gold standard to look for coverage of the outcomes and gaps in teaching. Quantitative data were collected in the form of time allocated for teaching, as a proxy for time spent per individual outcome. The data were collected independently by two students (KW and ZP) who had received prior training and assessed two separate data sets to increase inter-rater reliability. In terms of coverage of learning outcomes, 12 out of 13 were taught (the national average being 9.7). The school ranked sixth in the UK for time spent per topic and second in terms of overall coverage, meaning the school teaches a broad range of topics, with some explored in more detail than others. For the first outcome, 4 out of 4 objectives were covered (average 3.5), with 47 minutes spent per outcome (average 84 min); for the second, 5 out of 5 were covered (average 3.5), with 46 minutes spent (average 20 min); for the third, 3 out of 4 (average 2.5), with 10 minutes spent (average 19 min). A disproportionately large amount of time is spent delivering teaching regarding air pollution (respiratory illnesses), which resulted in the topic of sustainability in other specialties (musculoskeletal, ophthalmology, pediatrics, renal) being excluded from teaching. Conclusions: Currently, there is no coherent national strategy for teaching climate change topics, and as a result, the time spent on teaching and the coverage of objectives are unstandardized.
Keywords: audit, climate change, sustainability, education
Procedia PDF Downloads 87
278 Control of Doxorubicin Release Rate from Magnetic PLGA Nanoparticles Using a Non-Permanent Magnetic Field
Authors: Inês N. Peça, A. Bicho, Rui Gardner, M. Margarida Cardoso
Abstract:
Inorganic/organic nanocomplexes offer tremendous scope for future biomedical applications, including imaging, disease diagnosis and drug delivery. The combination of Fe3O4 with biocompatible polymers to produce smart drug delivery systems for use in pharmaceutical formulations presents a powerful tool for targeting anti-cancer drugs to specific tumor sites through the application of an external magnetic field. In the present study, we focused on evaluating the effect of the magnetic field application time on the rate of drug release from iron oxide polymeric nanoparticles. Doxorubicin, an anticancer drug, was selected as the model drug loaded into the nanoparticles. Nanoparticles composed of poly(d-lactide-co-glycolide) (PLGA), a biocompatible polymer already approved by the FDA, containing iron oxide nanoparticles (MNP) for magnetic targeting and doxorubicin (DOX), were synthesized by the o/w solvent extraction/evaporation method and characterized by scanning electron microscopy (SEM), dynamic light scattering (DLS), inductively coupled plasma-atomic emission spectrometry and Fourier transform infrared spectroscopy. The particles produced had smooth surfaces and spherical shapes, with sizes between 400 and 600 nm. The effect of the magnetic doxorubicin-loaded PLGA nanoparticles on cell viability was investigated in mammalian CHO cell cultures. The results showed that unloaded magnetic PLGA nanoparticles were nontoxic, while the magnetic particles without a polymeric coating showed a high level of toxicity. Concerning therapeutic activity, doxorubicin-loaded magnetic particles caused a remarkable enhancement of the cell inhibition rates compared to their non-magnetic counterparts. In vitro drug release studies performed under a non-permanent magnetic field show that the application time and the on/off cycle duration have a great influence on both the final amount and the rate of drug release. In order to determine the mechanism of drug release, the data obtained from the release curves were fitted to the semi-empirical equation of the Korsmeyer-Peppas model, which may be used to describe Fickian and non-Fickian release behaviour. The doxorubicin release mechanism was shown to be governed mainly by Fickian diffusion. The results obtained show that the rate of drug release from the produced magnetic nanoparticles can be modulated through the magnetic field application time. Keywords: drug delivery, magnetic nanoparticles, PLGA nanoparticles, controlled release rate
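The Korsmeyer-Peppas model invoked above is the power law Mt/M∞ = k·t^n, where exponents up to roughly n ≈ 0.45 are commonly read as Fickian diffusion and larger values as anomalous (non-Fickian) transport. The following minimal Python sketch shows how such a fit is typically performed; the release data points and the 0.45 threshold are illustrative assumptions, not values taken from the study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative release data: time (h) and fraction released (Mt/Minf),
# restricted to the early-release range where the power law is normally applied
t = np.array([0.5, 1, 2, 4, 8, 12, 24])
frac_released = np.array([0.08, 0.12, 0.17, 0.24, 0.34, 0.41, 0.57])

def korsmeyer_peppas(t, k, n):
    """Semi-empirical power-law release model: Mt/Minf = k * t**n."""
    return k * np.power(t, n)

(k_fit, n_fit), _ = curve_fit(korsmeyer_peppas, t, frac_released, p0=(0.1, 0.5))
print(f"k = {k_fit:.3f}, n = {n_fit:.2f}")
print("Fickian diffusion" if n_fit <= 0.45 else "Non-Fickian (anomalous) transport")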
Procedia PDF Downloads 261
277 Make Populism Great Again: Identity Crisis in Western World with a Narrative Analysis of Donald Trump's Presidential Campaign Announcement Speech
Authors: Soumi Banerjee
Abstract:
In this research paper, we will examine in depth Benedict Anderson’s definition of the nation as an imagined community, and we will analyze why and how national identities were created through long and complex processes, and how strong emotional bonds can exist between people within an imagined community, given that these people have never known each other personally but still feel some form of imagined unity. Such identity construction, on the part of an individual or within societies, is always in some sense in a state of flux, as imagined communities are ever changing, which provides the ontological foundation for this paper. This sort of identity crisis among individuals living in the Western world, who are in search of psychological comfort and security, illustrates a possible need for spatially dislocated, ontologically insecure and vulnerable individuals to have a secure identity. To create such an identity, there has to be something to build upon, which could be achieved through what may be termed ‘homesteading’. This could, in short, and in my interpretation of Kinnvall and Nesbitt’s concept, be described as a search for security that involves a search for ‘home’, where home acts as a secure place around which one can build an identity. The second half of the paper will then look into how populism and identity have played an increasingly important role in political elections in the so-called Western democracies of the world, using the U.S. as an example. Notions of ‘us and them’, of the people and the elites, will be examined and analyzed through a social constructivist theoretical lens. Here we will analyze how such narratives about identity and the nation state affect people, their personality development and their identity in different ways by studying U.S. President Donald Trump’s speeches and analyzing whether and how he used different identity-creating narratives to gain political and popular support. The reason for choosing narrative analysis as the method in this research paper is to use the narratives as a device for understanding how perceived notions of 'us and them' can initiate a profound identity crisis within a community or a nation-state. This is a relevant subject, as developments such as rising populist right-wing movements are being felt in a number of European states, with the so-called Brexit vote in the U.K. and the election of Donald Trump as president being two of the prime examples. This paper will then attempt to argue that these mechanisms are strengthened and gain significance when humans in an economically, socially or ontologically vulnerable position, imagined or otherwise, perceive themselves, in a general and broad sense, to be under pressure, and a sense of insecurity rises. These insecurities and this sense of being under threat have been on the rise in many of the Western states that are otherwise usually perceived to be among the safest, most democratically stable and most prosperous states in the world, which makes it of interest to study what has changed, and helps provide some part of the explanation as to how creating a ‘them’ in the discourse of national identity can cause a massive security crisis. Keywords: identity crisis, migration, ontological (in)security, nation-states
Procedia PDF Downloads 256
276 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis
Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin
Abstract:
With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms and equilibria involved. Replacing S8 with Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, during the first charge, a high polarization and a lack of reproducibility between tests are observed. Differences observed between raw Li2S material (micron-sized) and that electrochemically produced in a battery (nano-sized) may indicate that the electrochemical process depends on the particle size. The major focus of the presented work is therefore to deepen the understanding of the charge mechanism of the Li2S material, and more precisely to characterize the effect of the initial Li2S particle size on both the mechanism and the electrode preparation process. To do so, Li2S nanoparticles were synthesized via two routes, a liquid-phase synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles could effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process - a slurry composed of polyvinylidene fluoride polymer dissolved in N-methyl-2-pyrrolidone - cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material towards polymers and organic solvents at the nanometric scale. The coupling of two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), was then carried out in order to interpret the poorly understood first charge. This study discloses that the initial particle size of the active material has a great impact on the working mechanism, and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries. Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism
Procedia PDF Downloads 266
275 Cytotoxicity and Genotoxicity of Glyphosate and Its Two Impurities in Human Peripheral Blood Mononuclear Cells
Authors: Marta Kwiatkowska, Paweł Jarosiewicz, Bożena Bukowska
Abstract:
Glyphosate (N-phosphonomethylglycine) is a non-selective, broad-spectrum ingredient in the herbicide Roundup, used for over 35 years for the protection of agricultural and horticultural crops. Glyphosate was believed to be environmentally friendly, but recently a large body of evidence has revealed that it can negatively affect the environment and humans. It has been found that glyphosate is present in soil and groundwater. It can also enter the human body, resulting in its occurrence in blood at low concentrations of 73.6 ± 28.2 ng/ml. Research on potential genotoxicity and cytotoxicity can be an important element in determining the toxic effect of glyphosate. Under Regulation (EC) No 1107/2009 of the European Parliament, it is important to assess genotoxicity and cytotoxicity not only for the parent substance but also for its impurities, which are formed at different stages of the production of the main substance, glyphosate; verifying which of these compounds is more toxic is also required. Understanding the molecular pathways of action is extremely important in the context of environmental risk assessment. In 2002, the European Union decided that glyphosate is not genotoxic. However, studies recently performed around the world have produced results that contest the decision taken by the European Union committee. In March 2015, the World Health Organization (WHO) decided to change the classification of glyphosate to category 2A, which means that the compound is considered "probably carcinogenic to humans". This category relates to compounds for which there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. That is why we have investigated the genotoxic and cytotoxic effects of the most commonly used pesticide, glyphosate, and its impurities, N-(phosphonomethyl)iminodiacetic acid (PMIDA) and bis-(phosphonomethyl)amine, on human peripheral blood mononuclear cells (PBMCs), mostly lymphocytes. DNA damage (analysis of DNA strand breaks) using single cell gel electrophoresis (the comet assay) and ATP level were assessed. Cells were incubated with glyphosate and its impurities, PMIDA and bis-(phosphonomethyl)amine, at concentrations from 0.01 to 10 mM for 24 hours. Evaluation of genotoxicity using the comet assay showed a concentration-dependent increase in DNA damage for all compounds studied. The ATP level was decreased to zero at the highest concentration of the two investigated impurities, bis-(phosphonomethyl)amine and PMIDA. Changes were observed at the highest concentration to which a person can be exposed as a result of acute intoxication. Our study leads to the conclusion that the investigated compounds exhibited genotoxic and cytotoxic potential, but only at high concentrations to which people are not exposed environmentally. Acknowledgments: This work was supported by the Polish National Science Centre (Contract-2013/11/N/NZ7/00371), MSc Marta Kwiatkowska, project manager. Keywords: cell viability, DNA damage, glyphosate, impurities, peripheral blood mononuclear cells
Procedia PDF Downloads 482
274 New Derivatives 7-(diethylamino)quinolin-2-(1H)-one Based Chalcone Colorimetric Probes for Detection of Bisulfite Anion in Cationic Micellar Media
Authors: Guillermo E. Quintero, Edwin G. Perez, Oriel Sanchez, Christian Espinosa-Bustos, Denis Fuentealba, Margarita E. Aliaga
Abstract:
Bisulfite ion (HSO3-) has been used as a preservative in food, drinks, and medication. However, it is well known that HSO3- can cause health problems such as asthma and allergic reactions in people. For this reason, the development of analytical methods for detecting this ion has gained great interest. In this line, the use of colorimetric and/or fluorescent probes as a detection technique has acquired great relevance due to their high sensitivity and accuracy. In this context, 2-quinolinone derivatives have been found to possess promising activity as antiviral agents, sensitizers in solar cells, antifungals, antioxidants, and sensors. In particular, 7-(diethylamino)-2-quinolinone derivatives have attracted attention in recent years since their suitable photophysical properties make them promising fluorescent probes. In addition, there is evidence that photophysical properties and reactivity can be affected by the study medium, such as micellar media. Based on this background, chalcone-based 7-(diethylamino)-2-quinolinone derivatives should be able to be incorporated into a cationic micellar environment (cetyltrimethylammonium bromide, CTAB). Furthermore, the supramolecular control induced by the micellar environment should increase the reactivity of these derivatives towards nucleophilic analytes such as HSO3- (Michael-type addition reaction), leading to the generation of new colorimetric and/or fluorescent probes. In the present study, two chalcone-based 7-(diethylamino)-2-quinolinone derivatives, DQD1 and DQD2, were synthesized according to the method reported in the literature. These derivatives were structurally characterized by 1H and 13C NMR and HRMS-ESI. UV-VIS and fluorescence studies determined absorption bands near 450 nm, emission bands near 600 nm, fluorescence quantum yields near 0.01, and fluorescence lifetimes of 5 ps. These photophysical properties were improved in the presence of the cationic micellar medium (CTAB) thanks to the formation of adducts with association constants of the order of 2.5x10^5 M-1, which increased the quantum yields to 0.12 and gave two fluorescence lifetimes near 120 and 400 ps for DQD1 and DQD2. In addition, thanks to the micellar medium, the reactivity of these derivatives with nucleophilic analytes such as HSO3- was increased, as demonstrated by kinetic studies showing an increase in the bimolecular rate constants in the presence of the micellar medium. Finally, probe DQD1 was chosen as the best sensor since it detected HSO3- with excellent results. Keywords: bisulfite detection, cationic micelle, colorimetric probes, quinolinone derivatives
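Association constants of this magnitude are commonly extracted by fitting the fluorescence enhancement against surfactant concentration with a 1:1 association isotherm. The sketch below is a minimal illustration of such a fit, assuming a simple 1:1 probe-micelle equilibrium and entirely hypothetical titration data; it is not the authors' actual analysis.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical titration: CTAB concentration (M) vs. relative fluorescence intensity
conc = np.array([0.0, 2e-6, 5e-6, 1e-5, 2e-5, 5e-5, 1e-4])
intensity = np.array([1.0, 3.3, 4.9, 6.0, 6.9, 7.5, 7.7])  # arbitrary units

def binding_1to1(c, i_free, i_bound, ka):
    """Observed intensity for a 1:1 probe-micelle association equilibrium."""
    frac_bound = ka * c / (1.0 + ka * c)
    return i_free + (i_bound - i_free) * frac_bound

popt, pcov = curve_fit(binding_1to1, conc, intensity, p0=(1.0, 8.0, 1e5))
print(f"Fitted association constant Ka = {popt[2]:.2e} M^-1")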
Procedia PDF Downloads 94
273 Nondestructive Inspection of Reagents under High Attenuated Cardboard Box Using Injection-Seeded THz-Wave Parametric Generator
Authors: Shin Yoneda, Mikiya Kato, Kosuke Murate, Kodo Kawase
Abstract:
In recent years, there have been numerous attempts to smuggle narcotic drugs and chemicals by concealing them in international mail. Combating this requires a non-destructive technique that can identify such illicit substances in mail. Terahertz (THz) waves can pass through a wide variety of materials, and many chemicals show specific frequency-dependent absorption, known as a spectral fingerprint, in the THz range. Therefore, it is reasonable to investigate non-destructive mail inspection techniques that use THz waves. For this reason, in this work, we tried to identify reagents under highly attenuating shielding materials using an injection-seeded THz-wave parametric generator (is-TPG). Our THz spectroscopic imaging system using the is-TPG consisted of two non-linear crystals for the emission and detection of THz waves. A micro-chip Nd:YAG laser and a continuous-wave tunable external cavity diode laser were used as the pump and seed sources, respectively. The pump and seed beams were injected into the LiNbO₃ crystal, satisfying the noncollinear phase-matching condition, in order to generate a high-power THz wave. The emitted THz wave irradiated the sample, which was raster-scanned by the x-z stage while the frequency was changed, and multispectral images were obtained. The transmitted THz wave was then focused onto another crystal for detection and up-converted to a near-infrared detection beam based on nonlinear optical parametric effects, and the detection beam intensity was measured using an infrared pyroelectric detector. It was difficult to identify reagents in a cardboard box because of high noise levels. In this work, we introduce improvements for noise reduction and image clarification: the intensity of the near-infrared detection beam was correctly converted to the intensity of the THz wave, and a Gaussian spatial filter was introduced for a clearer THz image. Through these improvements to the analysis methods, we succeeded in identifying reagents hidden in a 42-mm-thick cardboard box filled with several obstacles, which attenuate the signal by 56 dB at 1.3 THz. Using this system, THz spectroscopic imaging was possible for saccharides and may also be applied to cases where illicit drugs are hidden in the box and multiple reagents are mixed together. Moreover, THz spectroscopic imaging can be achieved through even thicker obstacles by introducing an NIR detector with higher sensitivity. Keywords: nondestructive inspection, principal component analysis, terahertz parametric source, THz spectroscopic imaging
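The Gaussian spatial filter mentioned above is a standard image-smoothing step. The sketch below shows, under the assumption that the multispectral THz transmission data are stored as a frequency-by-height-by-width array, how such a filter could be applied before spectral-fingerprint analysis; the array contents and filter width are illustrative, not taken from the paper.

import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical multispectral THz image: (n_frequencies, n_y, n_x) transmitted intensities
thz_image = np.random.rand(10, 64, 64)

# Smooth only the two spatial axes (sigma in pixels is an illustrative choice);
# the frequency axis is left untouched so spectral fingerprints are preserved
filtered = gaussian_filter(thz_image, sigma=(0, 1.5, 1.5))

# Convert transmitted intensity to attenuation (dB) for subsequent spectral analysis
attenuation_db = -10.0 * np.log10(np.clip(filtered, 1e-6, None))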
Procedia PDF Downloads 178
272 Graphene-Graphene Oxide Dopping Effect on the Mechanical Properties of Polyamide Composites
Authors: Daniel Sava, Dragos Gudovan, Iulia Alexandra Gudovan, Ioana Ardelean, Maria Sonmez, Denisa Ficai, Laurentia Alexandrescu, Ecaterina Andronescu
Abstract:
Graphene and graphene oxide have been intensively studied due to their very good properties, which are either intrinsic to the material or arise from its easy doping with other functional groups. Graphene and graphene oxide have found a broad range of useful applications: in electronic devices, drug delivery systems, medical devices, sensors and opto-electronics, coating materials, sorbents of different agents for environmental applications, etc. This broad range of applications comes not only from the use of graphene or graphene oxide alone, or after prior functionalization with different moieties, but also from their role as building blocks and important components in many composite devices, where their addition brings new functionalities to the final composite or strengthens those already present in the parent product. An attempt was made to improve the mechanical properties of polyamide elastomers by compounding graphene oxide into the parent polymer composition. The addition of graphene oxide contributes to the properties of the final product, improving hardness and aging resistance. Graphene oxide has a lower hardness and tensile strength, and if the amount of graphene oxide in the final product is not correctly estimated, it can lead to mechanical properties comparable to, or even worse than, those of the starting material, with graphene oxide agglomerates becoming tear-initiation points in the final material if the amount added is too high (greater than 3% by mass relative to the parent material). Two types of standard tests, the hardness test and the tensile strength test, were performed on the obtained materials before and after the aging process. For the aging process, accelerated aging in extreme heat was used in order to simulate the effect of natural aging over a long period of time. FT-IR spectra were recorded for all materials. In the FT-IR spectra, only the bands corresponding to the polyamide were intense, while the characteristic bands of graphene oxide were very small in comparison, due to the very small amounts introduced into the final composite along with the low absorptivity of the graphene backbone and its limited number of functional groups. In conclusion, some compositions showed very promising results, both in the tensile strength and in the hardness tests. The best ratio of graphene to elastomer was between 0.6 and 0.8%, this addition extending the life of the product. Acknowledgements: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project ‘New nanostructured polymeric composites for centre pivot liners, centre plate and other components for the railway industry (RONERANANOSTRUCT)’, No: 18 PTE (PN-III-P2-2.1-PTE-2016-0146) is also acknowledged. Keywords: graphene, graphene oxide, mechanical properties, dopping effect
Procedia PDF Downloads 315
271 The Role of Building Information Modeling as a Design Teaching Method in Architecture, Engineering and Construction Schools in Brazil
Authors: Aline V. Arroteia, Gustavo G. Do Amaral, Simone Z. Kikuti, Norberto C. S. Moura, Silvio B. Melhado
Abstract:
Despite the significant advances made by the construction industry in recent years, the deeply entrenched absence of integration between the design and construction phases is still an evident and costly problem in building construction. Globally, the construction industry has sought to adopt collaborative practices through new technologies to mitigate the impacts of this fragmented process and to optimize its production. In this new technological business environment, professionals are required to develop new methodologies based on the notion of collaboration and the integration of information throughout the building lifecycle. This scenario also represents the industry’s reality in developing nations, and the increasing need for overall efficiency has demanded new educational alternatives at the undergraduate and post-graduate levels. In countries like Brazil, it is commonly understood that Architecture, Engineering and Building Construction educational programs are being required to review their traditional design pedagogical processes to promote a comprehensive notion of integration and simultaneity between the phases of a project. In this context, the coherent inclusion of computational design in all segments of the educational programs of construction-related professionals represents a significant research topic that can, in fact, affect industry practice. Thus, the main objective of the present study was to comparatively measure the effectiveness of the Building Information Modeling courses offered by the University of Sao Paulo, the most important academic institution in Brazil, at the Schools of Architecture and Civil Engineering, and the courses offered in well-recognized BIM research institutions, such as the School of Design in the College of Architecture of the Georgia Institute of Technology, USA, in order to evaluate the dissemination of BIM knowledge among students at the postgraduate level. The qualitative research methodology was developed based on the analysis of the programs and activities proposed by two BIM courses offered in each of the above-mentioned institutions, which were used as case studies. The data collection instruments were a student questionnaire, semi-structured interviews, participatory evaluation and pedagogical practices. The results revealed broad heterogeneity among the students regarding their professional experience, hours dedicated to training, and especially their general knowledge of BIM technology and its applications. The research observed that BIM is mostly understood as an operational tool and not as a methodological project development approach relevant to the whole building life cycle. The present research offers in its conclusion an assessment of the importance of incorporating BIM, efficiently and in its totality, as a teaching method in undergraduate and graduate courses in Brazilian architecture, engineering and building construction schools. Keywords: building information modeling (BIM), BIM education, BIM process, design teaching
Procedia PDF Downloads 154
270 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One of the problems with the currently developing technology is that images are scarce, with little variation in the gestures being presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this presents a limitation for the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is treated as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, where q represents the alphanumeric and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can then be applied by centering S and Y, subtracting their means. The data is then regularized by applying the Kaiser rule to the resulting eigenmatrix and then whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate. Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
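As a rough illustration of the corrector construction described above, the sketch below centers the two measurement sets, applies a Kaiser-style rule to the eigenvalues of the covariance matrix, whitens the data, and builds a single separating hyperplane that flags likely errors. It is a simplified, hypothetical reading of the procedure: the split into pairwise, positively correlated clusters and the exact thresholds of the published method are not reproduced, and the data are random stand-ins for network measurements.

import numpy as np

def build_corrector(S_correct, Y_errors):
    """Construct a simple error-flagging hyperplane from correct (S) and
    erroneous (Y) measurement vectors via a centre / Kaiser rule / whiten recipe."""
    X = np.vstack([S_correct, Y_errors])
    mean = X.mean(axis=0)
    Xc = X - mean  # centering

    # Eigen-decomposition of the covariance matrix of the centered data
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

    # Kaiser-style rule: keep components whose eigenvalue exceeds the mean eigenvalue
    keep = eigvals > eigvals.mean()
    W = eigvecs[:, keep] / np.sqrt(eigvals[keep])  # whitening projection

    Sw = (S_correct - mean) @ W
    Yw = (Y_errors - mean) @ W

    # Discriminant direction separating errors from correct predictions (whitened space)
    w = Yw.mean(axis=0) - Sw.mean(axis=0)
    w /= np.linalg.norm(w)
    threshold = 0.5 * (Sw @ w).mean() + 0.5 * (Yw @ w).mean()

    def flag_error(x):
        """Return True if measurement x falls on the error side of the hyperplane."""
        return ((x - mean) @ W) @ w > threshold

    return flag_error

# Hypothetical usage with random vectors standing in for internal network measurements
rng = np.random.default_rng(0)
corrector = build_corrector(rng.normal(0, 1, (200, 32)), rng.normal(1.5, 1, (20, 32)))
print(corrector(rng.normal(1.5, 1, 32)))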
Procedia PDF Downloads 101
269 The Metabolism of Built Environment: Energy Flow and Greenhouse Gas Emissions in Nigeria
Authors: Yusuf U. Datti
Abstract:
It is becoming increasingly clear that the level of resource consumption now enjoyed in developed nations will be impossible to sustain worldwide. While developing countries still have the advantage of low consumption and a smaller ecological footprint per person, they cannot simply develop in the same way as Western cities have developed in the past. The severe reality of population and consumption inequalities makes it contentious whether studies done in developed countries can be translated and applied to developing countries. In addition to these disparities, there are few or no energy metabolism studies in Nigeria; indeed, the majority of energy metabolism studies have been done only in developed countries. While research in Nigeria concentrates on other aspects and principles of sustainability, such as water supply, sewage disposal, energy supply, energy efficiency and waste disposal, which do not accurately capture the environmental impact of energy flow in Nigeria, this research sets itself apart by examining the flow of energy in Nigeria and the impact that this flow has on the environment. The aim of the study is to examine and quantify the metabolic flows of energy in Nigeria and their corresponding environmental impact. The study will quantify the level and pattern of energy inflow and the outflow of greenhouse gas emissions in Nigeria. It will describe measures to address the impact of existing energy sources and suggest alternative renewable energy sources in Nigeria that will lower greenhouse gas emissions. This study will investigate the metabolism of energy in Nigeria through a three-part methodology. The first step involves selecting and defining the study area and the variables that affect energy output (time of the year, stability of the country, income level, literacy rate and population). The second step involves analyzing, categorizing and quantifying the amount of energy generated by the various energy sources in the country. The third step involves analyzing what effect the variables would have on the environment. To ensure a representative study area, Africa’s most populous country, with the second-biggest economy and a place among the largest oil-producing countries in the world, was selected. This is due to the understanding that countries with large economies and dense populations are ideal places to examine sustainability strategies; hence the choice of Nigeria for the study. National data will be utilized; where such data cannot be found, local data will be employed and aggregated to reflect the national situation. The outcome of the study will help policy-makers better target energy conservation and efficiency programs and enable early identification and mitigation of any negative effects on the environment. Keywords: built environment, energy metabolism, environmental impact, greenhouse gas emissions and sustainability
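The second and third methodological steps amount to a simple flow-accounting exercise: the energy supplied by each source is multiplied by an emission factor to estimate the greenhouse gas outflow. The sketch below shows that arithmetic in minimal form; the energy quantities are purely illustrative and the emission factors are approximate placeholders, to be replaced by official national energy statistics and IPCC default factors in any real application.

# Minimal energy-flow / emissions accounting sketch (illustrative numbers only)
energy_supply_tj = {          # terajoules supplied per source, hypothetical values
    "natural_gas": 500_000,
    "oil_products": 800_000,
    "biomass": 1_200_000,
    "hydro": 30_000,
}
emission_factor_t_per_tj = {  # approximate tonnes CO2 per TJ; replace with IPCC/national factors
    "natural_gas": 56.1,
    "oil_products": 73.3,
    "biomass": 0.0,           # often treated as biogenic in national inventories
    "hydro": 0.0,
}

emissions_t = {src: tj * emission_factor_t_per_tj[src] for src, tj in energy_supply_tj.items()}
total_mt = sum(emissions_t.values()) / 1e6
print(f"Estimated energy-related CO2 outflow: {total_mt:.1f} Mt")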
Procedia PDF Downloads 183
268 Environmental Benefits of Corn Cob Ash in Lateritic Soil Cement Stabilization for Road Works in a Sub-Tropical Region
Authors: Ahmed O. Apampa, Yinusa A. Jimoh
Abstract:
The potential economic viability and environmental benefits of using a biomass waste, such as corn cob ash (CCA), as a pozzolan in stabilizing soils for road pavement construction in a sub-tropical region were investigated. Corn cob was obtained from Maya in South West Nigeria and processed into ash with characteristics similar to those of a Class C fly ash pozzolan as specified in ASTM C618-12. This was then blended with ordinary Portland cement (OPC) in CCA:OPC ratios of 1:1, 1:2 and 2:1. Each of these blends was then mixed with lateritic soil of AASHTO classification A-2-6(3) in varying percentages from 0 to 7.5% at 1.5% intervals. The soil-CCA-cement mixtures were thereafter tested for geotechnical index properties, including the BS Proctor compaction, California Bearing Ratio (CBR) and unconfined compression strength tests. The tests were repeated for the soil-cement mix without any CCA blending. The cost of the binder inputs and the optimal CCA:OPC blends in the stabilized soil were thereafter analyzed by developing algorithms that relate the experimental data on strength parameters (unconfined compression strength, UCS, and California Bearing Ratio, CBR) to the bivariate independent variables CCA and OPC content, using Matlab R2011b. An optimization problem was then set up to minimize the cost of chemical stabilization of laterite with CCA and OPC, subject to the constraints of the minimum strength specifications. The Evolutionary engine and the Generalized Reduced Gradient option of the Solver in MS Excel 2010 were used separately on the cells to obtain the optimal CCA:OPC blend. The optimal blend attaining the required strength of 1800 kN/m2 was determined as a 5.4% mix for the 1:2 CCA:OPC blend (OPC content 3.6%), compared with 4.2% for the OPC-only option, and as a 6.2% mix for the 1:1 blend (OPC content 3%). The 2:1 blend did not attain the required strength, though over a 100% gain in UCS value was obtained over the control sample with 0% binder. Given that 0.97 tonne of CO2 is released for every tonne of cement used (OEE, 2001), the reduced OPC requirement needed to attain the same result indicates the possibility of reducing the net CO2 contribution of the construction industry to the environment by 14 – 28.5% if CCA:OPC blends are widely used in soil stabilization, going by the results of this study. The paper concludes by recommending that Nigeria and other developing countries in the sub-tropics with an abundant stock of biomass waste should look towards intensifying the use of biomass waste as fuel and of the derived ash for the production of pozzolans for road works, thereby reducing overall greenhouse gas emissions in compliance with the objectives of the United Nations Framework Convention on Climate Change. Keywords: corn cob ash, biomass waste, lateritic soil, unconfined compression strength, CO2 emission
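The optimization step described above can be reproduced in outline with any constrained solver in place of the Excel Solver. The sketch below sets up the same kind of problem, minimizing binder cost subject to a minimum UCS of 1800 kN/m2 over the tested 0 - 7.5% range; the regression coefficients and unit costs are hypothetical placeholders, not the values fitted from the study's data.

from scipy.optimize import minimize

# Hypothetical fitted model: UCS (kN/m2) as a function of CCA and OPC contents (% by mass)
def ucs_model(x):
    cca, opc = x
    return 250.0 + 120.0 * cca + 420.0 * opc  # placeholder regression coefficients

# Hypothetical unit costs of the two binders (cost units per % added)
cost_cca, cost_opc = 1.0, 4.0

def total_cost(x):
    cca, opc = x
    return cost_cca * cca + cost_opc * opc

constraints = [{"type": "ineq", "fun": lambda x: ucs_model(x) - 1800.0}]  # UCS >= 1800 kN/m2
bounds = [(0.0, 7.5), (0.0, 7.5)]  # binder contents limited to the tested range

result = minimize(total_cost, x0=[3.0, 3.0], bounds=bounds, constraints=constraints)
cca_opt, opc_opt = result.x
print(f"Optimal blend: CCA = {cca_opt:.2f}%, OPC = {opc_opt:.2f}%, cost = {result.fun:.2f}")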
Procedia PDF Downloads 375