Search results for: clinical application
747 Insertion of Photovoltaic Energy at Residential Level at Tegucigalpa and Comayagüela, Honduras
Authors: Tannia Vindel, Angel Matute, Erik Elvir, Kelvin Santos
Abstract:
Currently, Honduras is incentivizing the generation of energy from renewable sources such as hydroelectricity, wind power, biomass and, more recently and with the strongest growth, photovoltaic energy. By July 2015, 455.2 MW of photovoltaic capacity had been installed, increasing the installed capacity of the national interconnected system by 24% over 2014, according to the National Energy Company (NEC); this made it possible to reduce the system's dependence on thermoelectric generation. Given the good results of those large-scale photovoltaic plants, a question arises: is the integration of micro-scale photovoltaic systems in urban and rural areas of interest to the distribution utility and to consumers? To answer that question, the insertion of photovoltaic energy into the residential sector of Tegucigalpa and Comayagüela (Central District), Honduras, was researched to determine its technical and economic viability. According to the National Statistics Institute (NSI), the department of Francisco Morazán had more than 180,000 houses with power service in 2001. Tegucigalpa, the departmental and national capital, together with Comayagüela, has the highest population density in the region, with 1,300,000 inhabitants in 2014 (NSI). The residential sector in the south-central region of Honduras represents a high share of demand, 49% of total consumption according to NEC figures for 2014, and 90% of this sector consumes in the range of 0 to 300 kWh/month. All this, in addition to the high level of losses in the transmission and distribution systems (31.3% in 2014) and the availability of an annual average solar radiation of 5.20 kWh/(m²·day) according to NASA, suggests the feasibility of implementing photovoltaic systems as a solution that gives households a degree of independence and, in addition, allows the non-used energy to be injected into the grid.
The capability of exchanging energy with the grid could make the acquisition of photovoltaic systems more affordable for consumers, through energy compensation programs or other kinds of incentives that could be created. The technical viability of inserting photovoltaic systems was analyzed by considering the monthly average solar radiation, the monthly average energy that would be generated with locally accessible technology, and the effects on the grid of injecting locally generated energy. The economic viability was also analyzed, considering the high cost of photovoltaic systems, utility costs, location, and the monthly energy consumption requirements of families. It was found that the inclusion of photovoltaic systems in Tegucigalpa and Comayagüela could decrease the demand of the region by 6 MW if 100% of households used photovoltaic systems, whose acquisition could be made more accessible with the help of government incentives and/or energy exchange programs.
Keywords: grid connected, photovoltaic, residential, technical analysis
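The monthly generation estimate described above can be sketched as a back-of-the-envelope calculation. The insolation figure is the NASA average cited in the abstract; the array area, module efficiency, and performance ratio below are illustrative assumptions, not figures from the study.

```python
# Hedged sketch: monthly energy estimate for a small rooftop PV system.
# Only AVG_INSOLATION comes from the abstract; the rest are assumptions.

AVG_INSOLATION = 5.20     # kWh/(m^2 * day), NASA average cited above
PANEL_AREA_M2 = 10.0      # assumed rooftop array area
EFFICIENCY = 0.15         # assumed module efficiency
PERFORMANCE_RATIO = 0.75  # assumed system losses (inverter, wiring, soiling)

def monthly_energy_kwh(days_in_month=30):
    """Estimated energy delivered per month, in kWh."""
    return (AVG_INSOLATION * PANEL_AREA_M2 * EFFICIENCY
            * PERFORMANCE_RATIO * days_in_month)

print(round(monthly_energy_kwh(), 1))  # → 175.5
```

Under these assumptions a 10 m² array yields roughly 175 kWh/month, which sits inside the 0-300 kWh/month consumption band the abstract reports for 90% of residential users.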
Procedia PDF Downloads 265
746 Phytomining for Rare Earth Elements: A Comparative Life Cycle Assessment
Authors: Mohsen Rabbani, Trista McLaughlin, Ehsan Vahidi
Abstract:
The remediation of sites polluted with heavy metals, including rare earth elements (REEs), has been a primary concern of researchers seeking to decontaminate soil. Among the methods developed to address this concern, phytoremediation has been established as an efficient, cost-effective, easy-to-use, and environmentally friendly approach, providing a long-term solution to this global problem. Furthermore, this technology has another great potential application in the metals production sector: recovering metals buried in soil via metal cropping. Considering the significant metal concentration in hyperaccumulators, the use of bioaccumulated metals to extract metals from plant matter has been proposed as a sub-economic area called phytomining. As a recent, more advanced technology for removing such pollutants from soil while producing critical metals, bioharvesting (phytomining/agromining) is considered a promising way to produce metals and meet the global demand for critical/target metals. The bio-ore obtained from phytomining can be safely disposed of or introduced into metal production pathways to obtain highly demanded metals such as REEs. It is well known that some hyperaccumulators, e.g., the fern Dicranopteris linearis, can absorb REEs from polluted soils and accumulate them in plant organs such as leaves and stems. After soil remediation, the plants can be harvested and introduced into the downstream steps, namely crushing/grinding, leaching, and purification, to extract REEs from the plant matter. This novel interdisciplinary field can fill the gap between agriculture, mining, metallurgy, and the environment. Despite the advantages of agromining for the REE production industry, key issues related to the environmental sustainability of the entire life cycle of this new concept have not yet been assessed.
Hence, a comparative life cycle assessment (LCA) study was conducted to quantify the environmental footprint of REE phytomining. The LCA study estimates the environmental effects associated with phytomining by considering critical impact categories such as climate change, land use, and ozone depletion. The results revealed that phytomining is an easy-to-use and environmentally sustainable approach to either remove REEs from polluted sites or produce REEs, offering a new source for the production of such metals. This LCA research provides guidelines for researchers working to build a reliable relationship between agriculture, mining, metallurgy, and the environment in order to counter soil pollution and keep the earth green and clean.
Keywords: phytoremediation, phytomining, life cycle assessment, environmental impacts, rare earth elements, hyperaccumulator
Procedia PDF Downloads 69
745 Railway Ballast Volumes Automated Estimation Based on LiDAR Data
Authors: Bahar Salavati Vie Le Sage, Ismaïl Ben Hariz, Flavien Viguier, Sirine Noura Kahil, Audrey Jacquin, Maxime Convert
Abstract:
The ballast layer plays a key role in railroad maintenance and in the geometry of the track structure. Ballast also holds the track in place as trains roll over it. Track ballast is packed between the sleepers and on the sides of railway tracks. An imbalance in ballast volume on the tracks can lead to safety issues as well as quick degradation of the overall quality of the railway segment. If there is a lack of ballast in the track bed during the summer, there is a risk that the rails will expand and buckle slightly due to high temperatures. Furthermore, knowledge of the ballast quantities that will be excavated during renewal works is important for efficient ballast management. The volume of excavated ballast per meter of track can be calculated from the excavation depth, the excavation width, the volume of the track skeleton (sleepers and rails) and the sleeper spacing. Since 2012, SNCF has been collecting 3D point cloud data covering its entire railway network using 3D laser scanning technology (LiDAR). This vast amount of data represents a model of the entire railway infrastructure, allowing various simulations to be conducted for maintenance purposes. This paper presents an automated method for ballast volume estimation based on the processing of LiDAR data. Abnormal ballast volumes on the tracks are estimated by analyzing the cross-section of the track. Further, since the amount of ballast required varies depending on the track configuration, knowledge of the ballast profile is required. Prior to track rehabilitation, excess ballast is often present in the ballast shoulders. Based on the 3D laser scans, a Digital Terrain Model (DTM) was generated, and the ballast profiles are automatically extracted from this data.
The ballast surplus is then estimated by comparing this empirically obtained ballast profile with a geometric model of the theoretical ballast profile thresholds dictated by maintenance standards. Ideally, this excess should be removed prior to renewal works and recycled to optimize the output of the ballast renewal machine. Based on these parameters, an application has been developed to allow the automatic measurement of ballast profiles. We evaluated the method on LiDAR scans of a 108-kilometer railroad segment, and the results show that the proposed algorithm detects a ballast surplus close to the total quantity of spoil ballast actually excavated.
Keywords: ballast, railroad, LiDAR, point cloud, track ballast, 3D point
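The surplus-estimation step described above, comparing a measured cross-section against a theoretical template and integrating the positive difference, can be sketched as follows. The profiles below are synthetic stand-ins, not SNCF data, and the template shape is an assumption for illustration.

```python
import numpy as np

# Hedged sketch: ballast surplus = excess cross-section area (trapezoid rule)
# integrated along the segment. Profiles here are synthetic, not real scans.

x = np.linspace(-3.0, 3.0, 61)  # lateral offset from track axis, m
theoretical = np.clip(1.0 - 0.4 * np.abs(x), 0.0, None)  # assumed template, m
measured = theoretical + 0.1    # synthetic DTM profile with uniform excess

def surplus_volume(x, measured, theoretical, segment_length_m):
    """Excess cross-section area times segment length, in m^3."""
    excess = np.clip(measured - theoretical, 0.0, None)
    # trapezoid rule over the lateral coordinate
    area = np.sum((excess[1:] + excess[:-1]) / 2.0 * np.diff(x))
    return area * segment_length_m

print(round(surplus_volume(x, measured, theoretical, 100.0), 1))  # → 60.0
```

A real pipeline would extract `measured` per cross-section from the DTM and aggregate segment by segment; the comparison logic stays the same.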
Procedia PDF Downloads 112
744 Temperature Dependence of Photoluminescence Intensity of Europium Dinuclear Complex
Authors: Kwedi L. M. Nsah, Hisao Uchiki
Abstract:
Quantum computation is a new and exciting field making use of quantum mechanical phenomena. In classical computers, information is represented as bits with values of either 0 or 1, but a quantum computer uses quantum bits in an arbitrary superposition of 0 and 1, enabling it to reach beyond the limits predicted by classical information theory. A lanthanide-ion quantum computer is an organic crystal containing lanthanide ions. Europium is a favored lanthanide, since it exhibits long nuclear spin coherence times, and Eu(III) is photostable and has two stable isotopes. In a europium organic crystal, the key factor is the mutual dipole-dipole interaction between two europium atoms. Crystals of the complex were formed by a 2:1 reaction of Eu(fod)3 and bpm. The transparent white crystals showed brilliant red luminescence under a 405 nm laser. Photoluminescence spectroscopy was performed both at room temperature and at cryogenic temperatures (300-14 K). The luminescence spectrum of [Eu(fod)3(μ-bpm)Eu(fod)3] showed the characteristic Eu(III) emission transitions in the range 570-630 nm, due to the deactivation of the emissive 5D0 state to the 7Fj levels. For the application of the dinuclear Eu3+ complex to a qubit device, attention was focused on the 5D0-7F0 transition, around 580 nm. The presence of the 5D0-7F0 transition at room temperature revealed that at least one europium site had no inversion center. Since this line is not split by the crystal field effect, any multiplicity observed is due to a multiplicity of Eu3+ sites. For a qubit element, a narrower line width of the 5D0 → 7F0 photoluminescence band of the Eu3+ ion is preferable. Cryogenic temperatures (300 K - 14 K) were applied to reduce inhomogeneous broadening and distinguish between ions. A CCD image sensor was used for the low-temperature photoluminescence measurement, and a far better resolved luminescence spectrum was obtained by cooling the complex to 14 K.
A red shift of 15 cm⁻¹ in the 5D0-7F0 peak position was observed upon cooling; the line shifted towards lower wavenumber. An emission spectrum in the 5D0-7F0 transition region was obtained to verify the line width. At 14 K, a peak with three times the magnitude of that at room temperature was observed. The temperature dependence of the 5D0 state of Eu(fod)3(μ-bpm)Eu(fod)3 was strong in the vicinity of 60 K to 100 K. Thermal quenching was observed above 100 K, where the intensity began to decrease slowly with increasing temperature. This temperature quenching of Eu3+ with increasing temperature is caused by energy migration. 100 K was found to be an appropriate temperature for observing the 5D0-7F0 emission peak. A europium dinuclear complex bridged by bpm was successfully prepared and monitored at cryogenic temperatures. At 100 K, the Eu3+ complex has good thermal stability, and this temperature is appropriate for the observation of the 5D0-7F0 emission peak. Sintering the sample above 600 °C could also be considered, but Eu3+ can then be reduced to Eu2+, which is why the cryogenic measurement is preferable to other methods.
Keywords: Eu(fod)₃, europium dinuclear complex, europium ion, quantum bit, quantum computer, 2,2'-bipyrimidine
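The relation between the 580 nm emission wavelength and the 15 cm⁻¹ shift reported above is a standard wavelength/wavenumber conversion, sketched here for illustration (the exact room-temperature peak position is taken as 580 nm, an approximation from the abstract):

```python
# Hedged sketch: convert the 5D0-7F0 wavelength to wavenumber and apply
# the reported 15 cm^-1 red shift (toward lower wavenumber) on cooling.

def nm_to_wavenumber(wavelength_nm):
    """Wavenumber in cm^-1 for a wavelength given in nm (1 cm = 1e7 nm)."""
    return 1e7 / wavelength_nm

room_temp_cm1 = nm_to_wavenumber(580.0)  # ~17241 cm^-1
shifted_cm1 = room_temp_cm1 - 15.0       # red shift: lower wavenumber
shifted_nm = 1e7 / shifted_cm1           # corresponding longer wavelength

print(round(room_temp_cm1, 1), round(shifted_nm, 2))
```

This confirms that a 15 cm⁻¹ red shift near 580 nm corresponds to roughly a 0.5 nm move to longer wavelength.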
Procedia PDF Downloads 182
743 Spatial Analysis in the Impact of Aquifer Capacity Reduction on Land Subsidence Rate in Semarang City between 2014-2017
Authors: Yudo Prasetyo, Hana Sugiastu Firdaus, Diyanah Diyanah
Abstract:
The lack of clean water supply in several big cities of Indonesia is a major problem for the development of urban areas. In the city of Semarang in particular, the population density and the growth of physical development are very high. Continuous, large-scale exploitation of groundwater (the aquifer) can result in a drastic year-by-year decline of the aquifer supply, especially given the intensity of aquifer use for household needs and industrial activities. This is worsened by the land subsidence phenomenon in some areas of Semarang. Therefore, dedicated research is needed to establish the spatial correlation between the decrease in aquifer capacity and the land subsidence phenomenon, and to provide evidence that land subsidence can be caused by a loss of pressure balance below the land surface. One way to observe the correlation pattern between the two phenomena is to apply remote sensing technology based on radar and optical satellites. Applying the Differential Interferometric Synthetic Aperture Radar (DInSAR) or Small Baseline Subset (SBAS) method to SENTINEL-1A image acquisitions from the 2014-2017 period yields the pattern of land subsidence. These results are spatially correlated with the aquifer-decline pattern over the same period. Survey results from 8 monitoring wells deeper than 100 m are used to observe the multi-temporal pattern of change in aquifer capacity. In addition, the aquifer capacity pattern is validated against 2 groundwater maps from the Ministry of Energy and Mineral Resources (ESDM) for Semarang city. The spatial correlation between the land subsidence pattern and the aquifer capacity is studied using overlay and statistical methods.
The results of this correlation show how strongly the decrease in groundwater capacity influences the distribution and intensity of land subsidence in Semarang city. In addition, the results are analyzed with respect to geological aspects, including hydrogeological parameters, soil types, aquifer types and geological structures. The outcome of this study is a correlation map of aquifer capacity against land subsidence in the city of Semarang for the period 2014-2017. Hopefully, the results can help the authorities in the spatial planning of Semarang in the future.
Keywords: aquifer, differential interferometric synthetic aperture radar (DInSAR), land subsidence, small baseline subset (SBAS)
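The overlay-and-statistics step described above amounts to correlating two co-registered rasters cell by cell. A minimal sketch under that assumption (the arrays below are synthetic stand-ins, not the DInSAR or well data from the study):

```python
import numpy as np

# Hedged sketch: Pearson correlation between two co-registered rasters,
# e.g. aquifer-level decline vs. DInSAR-derived subsidence rate.
# Both grids here are synthetic; real data would come from GIS layers.

rng = np.random.default_rng(0)
aquifer_decline = rng.uniform(0.0, 2.0, size=(20, 20))  # m/yr, synthetic
subsidence = 3.0 * aquifer_decline + rng.normal(0.0, 0.3, size=(20, 20))

def pearson_r(a, b):
    """Pearson correlation over all cells of two equal-shape grids."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

r = pearson_r(aquifer_decline, subsidence)
print(round(r, 2))
```

In practice the grids would first be masked to cells where both observations are valid; a high `r` would support the claimed link between aquifer decline and subsidence intensity.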
Procedia PDF Downloads 183
742 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) seed oil possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in a ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study applies supercritical fluid extraction (SFE) to hemp seed at various parameter settings, temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of the solvent flow rate, arranged through a central composite design (CCD). The CCD suggested 32 experimental runs, which were carried out. As the SFE process involves a large number of variables, the present study recommends resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information about the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets by resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N, without replacement. Here the jackknife sample size is 31 (one observation eliminated), repeated 32 times. Bootstrap is the frequently used statistical approach of estimating the sampling distribution of an estimator by resampling with replacement from the original sample. Here the bootstrap sample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, coefficient of variation and standard error of the mean.
For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 % for both resampling methods, which is the average (central value) of the sample means over all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 % through both resampling methods. The variance expresses the spread of the data around its mean; a greater variance indicates a larger range of output data, namely 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approximately 1 %), low standard error of the mean (< 0.8) and low coefficient of variation (< 0.2) reflect the accuracy of the sample for prediction. All estimator values of the coefficient of variation, standard deviation and standard error of the mean fall within the 95 % confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
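The two resampling schemes described above, 32 leave-one-out jackknife replicates of size 31 and 100 bootstrap resamples of size 32, can be sketched as follows. The data below are a synthetic stand-in for the 32 CCD runs, not the actual concentration measurements.

```python
import random
import statistics

# Hedged sketch of jackknife and bootstrap resampling on 32 observations.
# `data` is synthetic, centered near the reported ~58.5 % linoleic mean.

data = [58.5 + 0.1 * i for i in range(-16, 16)]  # 32 synthetic observations

def jackknife_means(sample):
    """Leave-one-out means: N replicates, each of size N-1 (no replacement)."""
    n, total = len(sample), sum(sample)
    return [(total - x) / (n - 1) for x in sample]

def bootstrap_means(sample, reps=100, seed=42):
    """Means of `reps` resamples of size N drawn with replacement."""
    rng = random.Random(seed)
    n = len(sample)
    return [statistics.fmean(rng.choices(sample, k=n)) for _ in range(reps)]

jk = jackknife_means(data)   # 32 jackknife replicates
bs = bootstrap_means(data)   # 100 bootstrap replicates
print(len(jk), len(bs))                 # → 32 100
print(round(statistics.fmean(jk), 2))   # jackknife mean equals sample mean
```

The standard deviation and standard error reported in the abstract would be computed over `jk` and `bs` in the same way, with the usual jackknife inflation factor `(n-1)` applied to the jackknife variance.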
Procedia PDF Downloads 141
741 Ecofriendly Synthesis of Au-Ag@AgCl Nanocomposites and Their Catalytic Activity on Multicomponent Domino Annulation-Aromatization for Quinoline Synthesis
Authors: Kanti Sapkota, Do Hyun Lee, Sung Soo Han
Abstract:
Nanocomposites have been widely used in electronics, catalysis, and in chemical, biological, biomedical and optical fields. They display broad biomedical properties, including antidiabetic, anticancer, antioxidant, antimicrobial and antibacterial activities. Moreover, nanomaterials have been used for wastewater treatment. Bimetallic hybrid nanocomposites, in particular, exhibit unique features compared to their monometallic components. Hybrid nanomaterials not only afford the multifunctionality endowed by their constituents but can also show synergistic properties; in addition, they have noteworthy catalytic and optical properties. Notably, Au-Ag based nanoparticles can be employed in sensing and catalysis due to their characteristic composition-tunable plasmonic properties. Because of their importance and usefulness, various methods have been developed for their preparation. Generally, chemical methods are described for synthesizing such bimetallic nanocomposites; in such chemical syntheses, harmful and hazardous chemicals cause environmental contamination and increase toxicity levels. Therefore, ecologically benign processes for the synthesis of nanomaterials are highly desirable to diminish such environmental and safety concerns. In this regard, we disclose here a simple, cost-effective, external-additive-free and eco-friendly method for the synthesis of Au-Ag@AgCl nanocomposites using Nephrolepis cordifolia root extract. The Au-Ag@AgCl NCs were obtained by the simultaneous reduction of cationic Ag and Au into AgCl in the presence of the plant extract. Particle sizes of 10 to 50 nm were observed, with an average diameter of 30 nm. The synthesized nanocomposite was characterized by various modern characterization techniques.
For example, UV-visible spectroscopy was used to determine the optical activity of the synthesized NCs, and Fourier-transform infrared (FT-IR) spectroscopy was employed to investigate the functional groups of the biomolecules that act as both reducing and capping agents during the formation of the nanocomposites. Similarly, powder X-ray diffraction (XRD), transmission electron microscopy (TEM), X-ray photoelectron spectroscopy (XPS), thermogravimetric analysis (TGA) and energy-dispersive X-ray (EDX) spectroscopy were used to determine the crystallinity, size, oxidation states, thermal stability and weight loss of the synthesized nanocomposites. As a synthetic application, the nanocomposite exhibited excellent catalytic activity in the multicomponent synthesis of biologically interesting quinolines via the domino annulation-aromatization reaction of aniline, arylaldehyde, and phenylacetylene derivatives. Interestingly, the nanocatalyst was efficiently recycled five times without substantial loss of catalytic activity.
Keywords: nanoparticles, catalysis, multicomponent, quinoline
Procedia PDF Downloads 128
740 Reproductive Governmentality in Mexico: Production, Control and Regulation of Contraceptive Practices in a Public Hospital
Authors: Ivan Orozco
Abstract:
Introduction: Forced contraception is part of an effort to control the lives and reproductive capacity of women through public health institutions. This phenomenon has affected many Mexican women historically and still persists today. The notion of reproductive governmentality refers to the mechanisms through which different historical configurations of social actors (state institutions, churches, donor agents, NGOs, etc.) use legislative controls, economic incentives, moral mandates, direct coercion, and ethical incitements to produce, monitor and control reproductive behaviors and practices. This research focuses on the use of these mechanisms by the Mexican state to control women's contraceptive practices in a public hospital. Method: An institutional ethnography was carried out, with the objective of knowing women's experiences from their own perspective, as they occur in their daily lives, while at the same time uncovering the structural elements that shape the discourses promoting women's contraception, even against their will. The fieldwork consisted of observing the dynamics between different participants within a public hospital and conducting interviews with the medical and nursing staff in charge of family planning services, as well as with women attending the family planning office. Results: Public health institutions in Mexico act as state tools to control and regulate reproduction.
Several strategies are used for this purpose: health personnel provide insufficient or misleading information to ensure that women agree to use contraceptives; health institutions provide economic incentives to members of the health staff who reach certain targets for contraceptive placement; young women are sent to the family planning service regardless of the reason for their visit to the clinic; and health campaigns are carried out in which contraceptives are administered outside the health facilities, directly in the communities of people who visit the hospital less frequently. All these mechanisms push women to use contraceptives; however, from the women's perspective, the reception of these discourses is ambiguous. While for some women the strategies become coercive mechanisms to use contraceptives against their will, for others they represent an opportunity to take control over their reproductive lives. Conclusion: Since 1974, the Mexican government has implemented campaigns promoting family planning methods as a means of controlling population growth. Although several laws establish that counselling must be carried out with a gender and human rights perspective, always respecting people's autonomy, this research testifies that health personnel use different strategies to force some women to use contraceptive methods, thereby violating their reproductive rights.
Keywords: feminist research, forced contraception, institutional ethnography, reproductive governmentality
Procedia PDF Downloads 166
739 Medicinal Plants: An Antiviral Depository with Complex Mode of Action
Authors: Daniel Todorov, Anton Hinkov, Petya Angelova, Kalina Shishkova, Venelin Tsvetkov, Stoyan Shishkov
Abstract:
Human herpesviruses (HHV) are ubiquitous pathogens with a pandemic spread across the globe. HHV type 1 is the main causative agent of cold sores and fever blisters around the mouth and on the face, whereas HHV type 2 is generally responsible for genital herpes outbreaks. The treatment of both viruses with antivirals of the nucleoside analogue group is more or less successful, but their wide application increasingly leads to the emergence of resistant mutants. In the past, medicinal plants have been used to treat a number of infectious and non-infectious diseases. Their diversity and their ability to produce a vast variety of secondary metabolites in response to the characteristics of the environment give them the potential to help us in our warfare against viral infections. Their variable chemical characteristics and complex composition are an advantage in the treatment of herpes, since they significantly complicate the emergence of resistant mutants. The screening process is difficult due to the lack of standardization, which is why it is especially important to follow the mechanism of the antiviral action of plants. On the one hand, the compounds of a plant may interact, resulting in enhanced antiviral effects; on the other, the most appropriate environmental conditions can be chosen to maximize the amount of active secondary metabolites. During our study, we followed the activity of various plant extracts on the viral replication cycle as well as their effect on the extracellular virion. We obtained our results following the logical sequence of the experimental setup: determining the cytotoxicity of the extracts, then evaluating the overall effect on viral replication and on the extracellular virion. During our research, we have screened a variety of plant extracts for antiviral activity against both virus replication and the virion itself.
Where the initial screening was positive, we investigated the effect of the extracts on the individual stages of the viral replication cycle: viral adsorption, penetration, and the effect on replication depending on the time of addition. Our results indicate that some of the extracts from Lamium album have several targets, with the first stages of the viral life cycle most affected. Several of our active antiviral agents have shown an effect on the extracellular virion and on the adsorption and penetration processes. Our research over the last decade has identified several promising antiviral plants, some of which are from the Lamiaceae family. The rich set of active ingredients of the plants in this family makes them a good source of antiviral preparations.
Keywords: human herpes virus, antiviral activity, Lamium album, Nepeta nuda
Procedia PDF Downloads 157
738 Gravitational Water Vortex Power Plant: Experimental-Parametric Design of a Hydraulic Structure Capable of Inducing the Artificial Formation of a Gravitational Water Vortex Appropriate for Hydroelectric Generation
Authors: Henrry Vicente Rojas Asuero, Holger Manuel Benavides Muñoz
Abstract:
Approximately 80% of the energy consumed worldwide is generated from fossil sources, which are responsible for the emission of a large volume of greenhouse gases. For this reason, the current global trend is the widespread use of energy produced from renewable sources, seeking security and diversification of the energy supply based on social cohesion, economic feasibility and environmental protection. In this scenario, small hydropower systems (P ≤ 10 MW) stand out due to their high efficiency, economic competitiveness and low environmental impact. Small hydropower systems, along with wind and solar energy, are expected to represent a significant percentage of the world's energy matrix in the near term. Among the various technologies in the state of the art of small hydropower, the gravitational water vortex power plant is a recent technology that excels because of its operational versatility, since it can operate with heads in the range of 0.70 m-2.00 m and flow rates from 1 m³/s to 20 m³/s. Its operating principle is the utilization of the rotational energy contained in a large, artificially induced water vortex. This paper presents the study and experimental design of an optimal hydraulic structure with the capacity to induce the artificial formation of a gravitational water vortex through a system of easy application and high efficiency, able to operate under conditions of very low head and minimal flow. The proposed structure consists of a channel with a variable base, a vortex inductor, and a tangential flow generator, coupled to a circular tank with a conical-transition bottom hole. In the laboratory tests, the angular velocity of the water vortex was related to the geometric characteristics of the inductor channel, and the influence of the conical-transition bottom hole on the physical characteristics of the water vortex was studied.
The results show larger angular velocity values as a function of depth; in addition, the presence of the conical transition in the bottom hole of the circular tank improves the vortex formation conditions while increasing the angular velocity values. Thus, the proposed system is a sustainable solution for the energy supply of rural areas near watercourses.
Keywords: experimental model, gravitational water vortex power plant, renewable energy, small hydropower
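The head and flow ranges quoted above bound the hydraulic power available to such a plant via P = η·ρ·g·Q·H. A hedged sketch of that bound, where the efficiency value is an assumption for illustration and not a measured figure from the prototype:

```python
# Hedged sketch: available shaft power over the quoted operating range
# (head 0.70-2.00 m, flow 1-20 m^3/s). Efficiency is an assumed value.

RHO = 1000.0  # kg/m^3, water density
G = 9.81      # m/s^2, gravitational acceleration

def hydraulic_power_kw(flow_m3s, head_m, efficiency=0.5):
    """Shaft power estimate P = eta * rho * g * Q * H, in kW."""
    return efficiency * RHO * G * flow_m3s * head_m / 1000.0

print(round(hydraulic_power_kw(1.0, 0.70), 2))   # low end  → 3.43 kW
print(round(hydraulic_power_kw(20.0, 2.00), 1))  # high end → 196.2 kW
```

Even at the assumed 50% efficiency, the range spans a few kW to roughly 200 kW, consistent with the micro-hydropower positioning of the technology.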
Procedia PDF Downloads 291
737 Groundwater Quality Assessment in the Vicinity of Tannery Industries in Warangal, India
Authors: Mohammed Fathima Shahanaaz, Shaik Fayazuddin, M. Uday Kiran
Abstract:
Groundwater quality is deteriorating day by day in different parts of the world for various reasons: toxic chemicals are discharged without proper treatment into inland water bodies and onto land, which in turn adds pollutants to the groundwater. In such situations, rural communities without municipal drinking water have to rely on groundwater for various uses even though it is polluted. The tannery industry is a major industry providing economy and employment in India. Since most developed countries have stopped using toxic chemicals, the tanning industry, whose major element is chromium, is being shifted towards developing countries. Most tanning industries in India are found in clusters concentrated mainly in the states of Tamil Nadu, West Bengal and Uttar Pradesh, and in limited places in Punjab. Limited work exists on the tanneries of Warangal. There is a group of 18 tanneries in the Desaipet, Enamamula region of Warangal, of which 4 are involved in the dry process and are less responsible for groundwater pollution. These tanneries discharge their effluents, after treatment, into the Sai Cheruvu. Although the effluents are treated before discharge, the Sai Cheruvu has turned pink, with elevated levels of BOD, COD, chromium, chlorides, total hardness, TDS and sulphates. An attempt was made to analyse groundwater samples around this polluted Sai Cheruvu region, since the literature shows that a single tannery can pollute groundwater up to a radius of 7-8 km from the point of disposal. Samples were collected from 6 different locations around the Sai Cheruvu. The analysis determined various groundwater constituents: pH, EC, TDS, TH, Ca²⁺, Mg²⁺, HCO₃⁻, Na⁺, K⁺, Cl⁻, SO₄²⁻, NO₃⁻, F⁻ and Cr⁶⁺. The analysis of these constituents gave values greater than the permissible limits.
Chromium is also present in the groundwater samples at levels exceeding permissible limits. People in Paidepally and Sardharpeta villages have already stopped using the groundwater and buy bottled water for drinking. Although they no longer use the groundwater for drinking, complaints have also been made about using this water for washing. A treatment process should therefore be adopted for the groundwater, and it should be simple and efficient. In this study, rice husk silica (RHS) was used to treat pollutants in the groundwater with varying RHS dosages and contact times. Rice husk was treated, dried and placed in a muffle furnace for 6 hours at 650°C. Reductions in total hardness, chloride and chromium levels were observed after the application of RHS. Pollutants reached permissible limits at dosages of 27.5 mg/L and 50 mg/L for a contact time of 130 min at constant pH and temperature.
Keywords: chromium, groundwater, rice husk silica, tanning industries
Procedia PDF Downloads 202
736 Human Interaction Skills and Employability in Courses with Internships: Report of a Decade of Success in Information Technology
Authors: Filomena Lopes, Miguel Magalhaes, Carla Santos Pereira, Natercia Durao, Cristina Costa-Lobo
Abstract:
The option to implement curricular internships with undergraduate students is a pedagogical choice that has yielded good results as perceived by academic staff, employers, and graduates in general, and in IT (Information Technology) in particular. This type of exercise has never been so relevant, as one tries to give meaning to the future in a landscape of rapid and deep change; the potentially disruptive impact on jobs of advances in robotics, artificial intelligence and 3-D printing, for example, is the focus of fierce debate. It is in this context that more and more students and employers pursue responses that promote careers and business development when making their investment decisions about training and hiring. Three decades of experience and research in the computer science degree and the information systems technologies degree at Portucalense University, a Portuguese private university, have provided strong evidence of the advantages of internships. The development of Human Interaction Skills, as well as the attractiveness of such experiences for students, are topics assumed as core in the conception and management of the activities implemented in these study cycles. The objective of this paper is to gather evidence of the Human Interaction Skills made explicit and valued within the curricular internship experiences of IT students and their employability. Data collection was based on the application of a questionnaire to internship supervisors and to students who completed internships in these undergraduate courses in the last decade.
The internship supervisor, responsible for monitoring the performance of IT students as traineeship activities evolve, evaluates the following Human Interaction Skills: motivation and interest in the activities developed, interpersonal relationships, cooperation in company activities, assiduity, ease of knowledge apprehension, compliance with norms, insertion in the work environment, productivity, initiative, ability to take responsibility, creativity in proposing solutions, and self-confidence. The results show that these undergraduate courses promote the development of Human Interaction Skills and that these students, once they finish their degree, are able to take up remunerated positions, mainly by invitation of the institutions in which they carried out their curricular internships. The findings of the present study contribute to widening the analysis of the effectiveness of internships, in terms of future research and actions regarding the transition from Higher Education pathways to the labour market.
Keywords: human interaction skills, employability, internships, information technology, higher education
Procedia PDF Downloads 290
735 Carbon Capture and Storage Using Porous-Based Aerogel Materials
Authors: Rima Alfaraj, Abeer Alarawi, Murtadha AlTammar
Abstract:
The global energy landscape heavily relies on the oil and gas industry, which faces the critical challenge of reducing its carbon footprint. To address this issue, the integration of advanced materials like aerogels has emerged as a promising solution to enhance sustainability and environmental performance within the industry. This study thoroughly examines the application of aerogel-based technologies in the oil and gas sector, focusing particularly on their role in carbon capture and storage (CCS) initiatives. Aerogels, known for their exceptional properties, such as high surface area, low density, and customizable pore structure, have garnered attention for their potential in various CCS strategies. The review delves into various fabrication techniques utilized in producing aerogel materials, including sol-gel, supercritical drying, and freeze-drying methods, to assess their suitability for specific industry applications. Beyond fabrication, the practicality of aerogel materials in critical areas such as flow assurance, enhanced oil recovery, and thermal insulation is explored. The analysis spans a wide range of applications, from potential use in pipelines and equipment to subsea installations, offering valuable insights into the real-world implementation of aerogels in the oil and gas sector. The paper also investigates the adsorption and storage capabilities of aerogel-based sorbents, showcasing their effectiveness in capturing and storing carbon dioxide (CO₂) molecules. Optimization of pore size distribution and surface chemistry is examined to enhance the affinity and selectivity of aerogels towards CO₂, thereby improving the efficiency and capacity of CCS systems. Additionally, the study explores the potential of aerogel-based membranes for separating and purifying CO₂ from oil and gas streams, emphasizing their role in the carbon capture and utilization (CCU) value chain in the industry. 
Emerging trends and future perspectives in integrating aerogel-based technologies within the oil and gas sector are also discussed, including the development of hybrid aerogel composites and advanced functional components to further enhance material performance and versatility. By synthesizing the latest advancements and future directions in aerogels used for CCS applications in the oil and gas industry, this review offers a comprehensive understanding of how these innovative materials can aid the transition towards a more sustainable and environmentally conscious energy landscape. The insights provided can assist strategic decision-making, drive technology development, and foster collaborations among academia, industry, and policymakers to promote the widespread adoption of aerogel-based solutions in the oil and gas sector.
Keywords: CCS, porous, carbon capture, oil and gas, sustainability
Procedia PDF Downloads 45
734 Development and Characterization of Novel Topical Formulation Containing Niacinamide
Authors: Sevdenur Onger, Ali Asram Sagiroglu
Abstract:
Hyperpigmentation is a cosmetically unappealing skin problem caused by an overabundance of melanin in the skin. Its pathophysiology involves melanocytes being exposed to paracrine melanogenic stimuli, which can upregulate melanogenesis-related enzymes (such as tyrosinase) and drive melanosome formation. Tyrosinase is biochemically linked to the development of melanosomes; therefore, decreasing tyrosinase activity to reduce melanosome formation has become the main target of hyperpigmentation treatment. Niacinamide (NA) is a natural compound found in a variety of plants that is used as a skin-whitening ingredient in cosmetic formulations. NA decreases melanogenesis in the skin by inhibiting melanosome transfer from melanocytes to the overlying keratinocytes. Furthermore, NA protects the skin from reactive oxygen species and reinforces the skin barrier, reducing moisture loss by increasing ceramide and fatty acid synthesis. However, it is very difficult for hydrophilic compounds such as NA to penetrate deep into the skin, and the nicotinic acid present in NA can cause irritation. We have therefore concentrated on strategies to increase NA skin permeability while avoiding its irritating effects. Since nanotechnology can affect drug penetration behaviour by controlling release and increasing the period of permanence on the skin, it can be a useful technique in the development of whitening formulations. Liposomes have become increasingly popular in the cosmetics industry in recent years due to benefits such as their lack of toxicity, high penetration ability into living skin layers, ability to increase skin moisture by forming a thin layer on the skin surface, and suitability for large-scale production. Therefore, liposomes containing NA were developed for this study.
Different formulations were prepared by varying the amounts of phospholipid and cholesterol and were examined in terms of particle size, polydispersity index (PDI) and pH. The pH values of the produced formulations were found to be compatible with the pH of the skin. Particle sizes were smaller than 250 nm, and the particles were of homogeneous size in the formulation (PDI < 0.30). Despite the important advantages of liposomal systems, they have low viscosity and stability for topical use. For these reasons, liposomal cream formulations were prepared in this study for easy topical application of the liposomal systems. As a result, liposomal cream formulations containing NA were successfully prepared and characterized. Following the in-vitro release and ex-vivo diffusion studies to be conducted as the study continues, it is planned to test the formulation that gives the best results on volunteers, after obtaining the approval of the ethics committee.
Keywords: delivery systems, hyperpigmentation, liposome, niacinamide
Procedia PDF Downloads 112
733 A Hydrometallurgical Route for the Recovery of Molybdenum from Spent Mo-Co Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal and finds applications in petroleum refining, thermocouples, X-ray tubes and steel alloying, owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases during desulphurisation as they become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency and recyclability. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3.0 mol/L HCl, and the leach liquor, containing Mo-870 ppm, Co-341 ppm, Al-508 ppm and Fe-42 ppm, was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2.0 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by countercurrent simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two-stage countercurrent extraction at A/O = 1:1, with negligible extraction of Co and Al. Iron, however, was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% of the molybdenum, at 99% purity, was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite syn-MoO₃ structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃. EDX analysis of MoO₃ shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesized MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants and as a catalyst.
Keywords: Cyphos IL 102, extraction, spent Mo-Co catalyst, recovery
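The gain from moving to two countercurrent stages can be sketched with the standard Kremser relation for countercurrent solvent extraction. This is an illustrative back-of-the-envelope check, not the authors' method: the distribution ratio D below is back-calculated from the reported ~85% single-stage extraction at A/O = 1:1 and is an assumption, and a constant D is assumed across stages (a real McCabe-Thiele construction uses the measured isotherm instead).

```python
# Illustrative sketch of countercurrent extraction stages (Kremser relation).
# Assumption: constant distribution ratio D, back-calculated from the
# reported ~85% single-stage Mo extraction at A/O = 1:1; not study data.

def kremser_recovery(extraction_factor: float, stages: int) -> float:
    """Fraction of solute extracted after `stages` countercurrent stages.

    extraction_factor = D * (O/A); valid for extraction_factor != 1.
    """
    e = extraction_factor
    not_extracted = (e - 1) / (e ** (stages + 1) - 1)
    return 1.0 - not_extracted

# Back-calculate D from an 85% single-stage extraction at O/A = 1:
single_stage = 0.85
D = single_stage / (1 - single_stage)  # ~5.67

two_stage = kremser_recovery(D, stages=2)
print(f"Predicted two-stage recovery: {two_stage:.1%}")
```

Under these simplifying assumptions the predicted two-stage recovery comes out around 97%, the same order as the ~95.4% measured in the countercurrent simulation, which is consistent with the McCabe-Thiele conclusion that two stages suffice for near-quantitative extraction.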
Procedia PDF Downloads 173
732 Combining Nitrocarburisation and Dry Lubrication for Improving Component Lifetime
Authors: Kaushik Vaideeswaran, Jean Gobet, Patrick Margraf, Olha Sereda
Abstract:
Nitrocarburisation is a surface hardening technique often applied to improve the wear resistance of steel surfaces. It is considered a promising solution in comparison with other processes such as flame spraying, owing to the formation of a diffusion layer that provides mechanical integrity, as well as its cost-effectiveness. To improve other tribological properties of the surface, such as the coefficient of friction (COF), dry lubricants are used. Currently, steel components in many applications that use either of these techniques individually face the limitations of each: a high COF for nitrocarburized surfaces and low wear resistance for dry lubricant coatings. To this end, the current study involves the creation of a hybrid surface by impregnating a nitrocarburized surface with a dry lubricant. The mechanical strength and hardness of Gerster SA's nitrocarburized surfaces, combined with the impregnation of the porous outermost layer with a solid lubricant, create a hybrid surface possessing both outstanding wear resistance and a low friction coefficient, with high adherence to the substrate. Gerster SA has state-of-the-art technology for the surface hardening of various steels. Through their expertise in the field, the nitrocarburizing process parameters (atmosphere, temperature, dwelling time) were optimized to obtain samples that have a distinct porous structure (in terms of size, shape, and density), as observed by metallographic and microscopic analyses. The porosity thus obtained is suitable for the impregnation of a dry lubricant. A commercially available dry lubricant with a thermoplastic matrix was employed for the impregnation process, which was optimized to obtain a void-free interface with the surface of the nitrocarburized layer (henceforth called the hybrid surface).
In parallel, metallic samples without nitrocarburisation were also impregnated with the same dry lubricant as a reference (henceforth called the reference surface). The reference and nitrocarburized surfaces, with and without the dry lubricant, were tested for their tribological behaviour by sliding against a quenched steel ball using a nanotribometer. Without any lubricant, the nitrocarburized surface showed a wear rate 5x lower than the reference metal. In the presence of a thin film of dry lubricant (< 2 micrometers) and under high loads (500 mN, or ~800 MPa), the COF for the reference surface increased from ~0.1 to > 0.3 within 120 m of sliding, while the hybrid surface retained a COF < 0.2 for over 400 m. In addition, while the steel ball sliding against the reference surface showed heavy wear, the corresponding ball sliding against the hybrid surface showed very limited wear. Electron microscopy observations of the sliding tracks on the hybrid surface show the presence of both the nitrocarburized nodules and the lubricant, whereas no traces of lubricant were found in the sliding track on the reference surface. In this manner, the clear advantage of combining nitrocarburisation with the impregnation of a dry lubricant to form a hybrid surface has been demonstrated.
Keywords: dry lubrication, hybrid surfaces, improved wear resistance, nitrocarburisation, steels
Procedia PDF Downloads 122
731 Strategies for Incorporating Intercultural Intelligence into Higher Education
Authors: Hyoshin Kim
Abstract:
Most post-secondary educational institutions offer a wide variety of professional development programs and resources in order to advance the quality of education. Such programs are designed to support faculty members by focusing on topics such as course design, behavioural learning objectives, class discussion, and evaluation methods. These are based on good intentions and might help both new and experienced educators. However, the fundamental flaw is that these 'effective methods' are assumed to work regardless of what we teach and whom we teach. This paper focuses on intercultural intelligence and its application to education. It presents a comprehensive literature review on context and cultural diversity in terms of beliefs, values and worldviews. What has worked well with a group of homogeneous local students may not work well with more diverse and international students, because students hold different notions of what it means to learn or know something. It is necessary for educators to move away from certain sets of generic teaching skills, which are based on a limited, particular view of teaching and learning. The main objective of the research is to expand our teaching strategies by incorporating what students bring to the course. There has been a growing number of resources and texts on teaching international students. Unfortunately, they tend to be based on the deficiency model, which treats diversity not as a strength but as a problem to be solved. This view is evidenced by the heavy emphasis on assimilationist approaches: cultural difference is negatively evaluated, either implicitly or explicitly, and the pressure is therefore on culturally diverse students. The following questions reflect the underlying assumption of deficiency: How can we make them learn better? How can we bring them into the mainstream academic culture? How can they adapt to Western educational systems?
Even though these questions may be well-intended, there seems to be something fundamentally wrong, as an assumption of cultural superiority is embedded in this kind of thinking. This paper examines how educators can incorporate intercultural intelligence into course design by utilizing a variety of tools such as pre-course activities, peer learning and reflective learning journals. The main goal is to explore ways to engage diverse learners in all aspects of learning. This can be achieved through activities designed to understand their prior knowledge, life experiences, and relevant cultural identities. It is crucial to link course material to students' diverse interests, thereby enhancing the relevance of course content and making learning more inclusive. Internationalization of higher education can be successful only when cultural differences are respected and celebrated as essential and positive aspects of teaching and learning.
Keywords: intercultural competence, intercultural intelligence, teaching and learning, post-secondary education
Procedia PDF Downloads 212
730 Comparison between Conventional Bacterial and Algal-Bacterial Aerobic Granular Sludge Systems in the Treatment of Saline Wastewater
Authors: Philip Semaha, Zhongfang Lei, Ziwen Zhao, Sen Liu, Zhenya Zhang, Kazuya Shimizu
Abstract:
The increasing generation of saline wastewater through various industrial activities is becoming a global concern for activated sludge (AS) based biological treatment, which is widely applied in wastewater treatment plants (WWTPs). For the AS process, an increase in wastewater salinity has a negative impact on overall performance. The advent of conventional aerobic granular sludge (AGS), or bacterial AGS, biotechnology has gained much attention because of its superior performance. The development of algal-bacterial AGS could enhance nutrient removal and potentially reduce aeration cost through symbiotic algal-bacterial activity, and thus could also reduce overall treatment cost. Nonetheless, salt stress can potentially decrease biomass growth, microbial activity and nutrient removal. Up to the present, little information is available on saline wastewater treatment by algal-bacterial AGS. To the authors' best knowledge, the two AGS systems have not been compared in terms of nutrient removal capacity in the context of increasing salinity. This study sought to determine the impact of salinity on the algal-bacterial AGS system in comparison with the bacterial AGS system, contributing to the application of AGS technology to real-world saline wastewater treatment. In this study, the salt concentrations tested were 0 g/L, 1 g/L, 5 g/L, 10 g/L and 15 g/L of NaCl with 24-hr artificial illuminance of approximately 97.2 µmol m⁻² s⁻¹, and mature bacterial and algal-bacterial AGS were used for the operation of two identical sequencing batch reactors (SBRs), each with a working volume of 0.9 L. The results showed that the salinity increase caused no apparent change in the colour of the bacterial AGS, while the colour of the algal-bacterial AGS progressively changed from green to dark green.
A consequent increase in granule diameter and fluffiness was observed in the bacterial AGS reactor with increasing salinity, in contrast to a decrease in algal-bacterial AGS diameter. Nitrite accumulation increased from 1.0 mg/L and 0.4 mg/L at 1 g/L NaCl in the bacterial and algal-bacterial AGS systems, respectively, to 9.8 mg/L in both systems when the NaCl concentration varied from 5 g/L to 15 g/L. Almost no ammonia nitrogen was detected in the effluent except at 10 g/L NaCl, where it averaged 4.2 mg/L and 2.4 mg/L in the bacterial and algal-bacterial AGS systems, respectively. Nutrient removal in the algal-bacterial system was relatively higher than in the bacterial AGS system in terms of nitrogen and phosphorus. Nonetheless, the nutrient removal rate was almost 50% or lower. The results show that algal-bacterial AGS is more adaptable to salinity increases and could be more suitable for saline wastewater treatment. Optimization of operating conditions for the algal-bacterial AGS system would be important to ensure stably high efficiency in practice.
Keywords: algal-bacterial aerobic granular sludge, bacterial aerobic granular sludge, nutrient removal, saline wastewater, sequencing batch reactor
Procedia PDF Downloads 148
729 Optimizing the Doses of Chitosan/Tripolyphosphate Loaded Nanoparticles of Clodinofop Propargyl and Fenoxaprop-P-Ethyl to Manage Avena Fatua L.: An Environmentally Safer Alternative to Control Weeds
Authors: Muhammad Ather Nadeem, Bilal Ahmad Khan, Hussam F. Najeeb Alawadi, Athar Mahmood, Aneela Nijabat, Tasawer Abbas, Muhammad Habib, Abdullah
Abstract:
The global prevalence of Avena fatua infestation poses a significant challenge to wheat sustainability. While chemical control stands out as an efficient and rapid way to control weeds, concerns over the development of resistance in weeds and environmental pollution have led to criticism of herbicide use. Consequently, this study was designed to address these challenges through the chemical synthesis, characterization, and optimization of chitosan-based nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl for the effective management of A. fatua. Chitosan-based nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl were prepared using the ionic gelification technique. These nanoparticles were applied at the 3-4 leaf stage of the weed at seven different doses: D0 (untreated check), D1 (recommended dose of traditional herbicide (TH)), D2 (recommended dose of nano-herbicide (NPs-H)), D3 (NPs-H at a 5-fold lower dose), D4 (NPs-H at a 10-fold lower dose), D5 (NPs-H at a 15-fold lower dose), and D6 (NPs-H at a 20-fold lower dose). Characterization of the chitosan-containing herbicide nanoparticles (CHT-NPs) was conducted using FT-IR analysis, which demonstrated a close match with standard parameters. The UV-visible spectrum further revealed absorption peaks at 310 nm for NPs of clodinofop-propargyl and at 330 nm for NPs of fenoxaprop-P-ethyl. This research aims to contribute to sustainable weed management practices by addressing the challenges associated with chemical herbicide use. The application of chitosan-based nanoparticles (CHT-NPs) containing fenoxaprop-P-ethyl and clodinofop-propargyl at the recommended dose of the standard herbicide resulted in 100% mortality and visible injury to weeds.
Surprisingly, when applied at a 5-fold lower dose, these chitosan-containing nanoparticles of clodinofop-propargyl and fenoxaprop-P-ethyl still demonstrated extreme control efficacy. Furthermore, at a 10-fold lower dose than the recommended dose of the standard clodinofop-propargyl and fenoxaprop-P-ethyl herbicides, the chitosan-based nanoparticles exhibited comparable effects on the chlorophyll content, visual injury (%), mortality (%), plant height (cm), fresh weight (g), and dry weight (g) of A. fatua. This study indicates that chitosan/tripolyphosphate-loaded nanoparticles containing clodinofop-propargyl and fenoxaprop-P-ethyl can be used effectively for the management of A. fatua at a 10-fold lower dose, highlighting their potential for sustainable and efficient weed control.
Keywords: mortality, chitosan-based nanoparticles, visual injury, chlorophyll content, 5-fold lower dose
Procedia PDF Downloads 56
728 Cognitive Linguistic Features Underlying Spelling Development in a Second Language: A Case Study of L2 Spellers in South Africa
Authors: A. Van Staden, A. Tolmie, E. Vorster
Abstract:
Research confirms the multifaceted nature of spelling development and underscores the importance of the cognitive and linguistic skills that affect sound spelling development, such as working and long-term memory, phonological and orthographic awareness, mental orthographic images, semantic knowledge and morphological awareness. This has clear implications for the many South African English second language (L2) spellers who attempt to become proficient spellers. Since English has an opaque orthography, with irregular spelling patterns and insufficient sound/grapheme correspondences, L2 spellers can neither rely nor draw on the phonological awareness skills of their first language (for example, Sesotho and many other African languages) to assist them in spelling the majority of English words. Epistemologically, this research is informed by social constructivism. In addition, the researchers hypothesized that the principles of the Overlapping Waves Theory were an appropriate lens through which to investigate whether L2 spellers could significantly improve their spelling skills via an alternative route to spelling development, namely the orthographic route, and more specifically via the application of visual imagery. Post-test results confirmed the results of previous research arguing for the interactive nature of different cognitive and linguistic systems, such as working memory and its subsystems and long-term memory, as learners were systematically guided to store visual orthographic images of words in their long-term lexicons. Moreover, the results showed that L2 spellers in the experimental group (n = 9) significantly outperformed L2 spellers (n = 9) in the control group, whose intervention involved phonological awareness (and coding), including the teaching of spelling rules.
Consequently, L2 learners in the experimental group improved significantly on all the post-test measures included in this investigation, namely the four sub-tests of short-term memory as well as two spelling measures (i.e., diagnostic and standardized measures). Against this background, the findings of this study look promising and have shown that, within a social-constructivist learning environment, learners can be systematically guided to apply higher-order thinking processes, such as visual imagery, to successfully store and retrieve mental images of spelling words from their output lexicons. Moreover, results from the present study could play an important role in directing research into this under-researched aspect of L2 literacy development within the South African education context.
Keywords: English second language spellers, phonological and orthographic coding, social constructivism, visual imagery as spelling strategy
Procedia PDF Downloads 361
727 Development of Social Competence in the Preparation and Continuing Training of Adult Educators
Authors: Genute Gedviliene, Vidmantas Tutlys
Abstract:
The aim of this paper is to reveal the deployment and development of social competence in higher education programmes in adult education and in the continuing training and competence development of andragogues. The paper compares how the issues of cooperation and communication in the learning and teaching processes are treated in the study programmes and in the courses of continuing training of andragogues. Theoretical and empirical research methods were combined in the analysis. The following methods were applied: 1) Literature and document analysis helped to highlight communication and cooperation as fundamental phenomena of social competence and their importance for adult education in the context of digitalization and globalization. Research studies on the development of social competence in the field of andragogy were also analysed, as were studies on the place and weight of social competence in the overall competence profile of the andragogue. 2) The empirical study is based on a questionnaire survey. The survey population consists of 240 students of bachelor and master degree studies of andragogy in Lithuania and 320 representatives of the different bodies and institutions involved in the continuing training and professional development of adult educators in Lithuania. The themes of the survey questionnaire were defined on the basis of the findings of the literature review and included the following: 1) the respondents' opinions on the role and place of social competence in the work of the andragogue; 2) the respondents' opinions on the role and place of the development of social competence in the curricula of higher education studies and continuing training courses; 3) judgements on the implications of higher education studies and courses of continuing training for the development of social competence and its deployment in the work of the andragogue.
Data analysis disclosed a wide range of ways and modalities for the deployment and development of social competence in the preparation and continuing training of adult educators. Social competence is important for students and adult education providers not only as an auxiliary capability for communication and the transfer of information, but also as the outcome of collective learning, leading to the development of new capabilities applied by learners in the learning process, in their professional field of adult education and in their social lives. Equally, social competence is necessary for effective adult education activities not only as an auxiliary capacity applied in the teaching process, but also as a potential for the improvement, development and sustainability of didactic competence and know-how in this field. The students of higher education programmes in the field of adult education treat social competence as an important generic capacity for the work of the adult educator, whereas adult education providers discern concrete issues in the application of social competence in the different processes of adult education, from curriculum design through to the assessment of learning outcomes.
Keywords: adult education, andragogues, social competence, curriculum
Procedia PDF Downloads 146726 The Two Question Challenge: Embedding the Serious Illness Conversation in Acute Care Workflows
Authors: D. M. Lewis, L. Frisby, U. Stead
Abstract:
Objective: Many patients receive invasive treatments in acute care or die in hospital without having had comprehensive goals of care conversations. Some of these treatments may not align with the patient's wishes, may be futile, and may cause unnecessary suffering. While many staff may recognize the benefits of engaging patients and families in Serious Illness Conversations (a goals of care framework developed by Ariadne Labs in Boston), few staff feel confident and/or competent in having these conversations in acute care. Another barrier may be that these conversations are not incorporated into the current workflow. An educational exercise, titled the Two Question Challenge, was initiated on four medical units across two Vancouver Coastal Health (VCH) hospitals in an attempt to engage the entire interdisciplinary team in asking patients and families questions about goals of care and to improve the documentation of these expressed wishes and preferences. Methods: Four acute care units across two separate hospitals participated in the Two Question Challenge. On each unit, over the course of two eight-hour shifts, all members of the interdisciplinary team were asked to select at least two questions from a selection of nine goals of care questions. They were asked to pose these questions to a patient or family member during their shift and then to document the conversations in a centralized Advance Care Planning/Goals of Care discussion record in the patient's chart. A visual representation of conversation outcomes was created to demonstrate to staff and patients the breadth of conversations that took place throughout the challenge. Staff and patients were interviewed about their experiences throughout the challenge. Two palliative approach leads remained present on the units throughout the challenge to support, guide, or role model these conversations.
Results: Across the four acute care medical units, 47 interdisciplinary staff participated in the Two Question Challenge, including nursing, allied health, and a physician. A total of 88 questions were asked of patients or their families about goals of care, and 50 newly documented goals of care conversations were charted. Two code statuses were changed as a result of the conversations. Patients voiced an appreciation for these conversations, and staff were able to successfully incorporate these questions into their daily care. Conclusion: The Two Question Challenge proved to be an effective way of having teams explore the goals of care of patients and families in an acute care setting. Staff felt that they gained confidence and competence. Both staff and patients found these conversations to be meaningful and impactful and felt they were notably different from their usual interactions. Documentation of these conversations in a centralized location that is easily accessible to all care providers increased significantly. Application of the Two Question Challenge in non-medical units or other care settings, such as long-term care facilities or community health units, should be explored in the future. Keywords: advance care planning, goals of care, interdisciplinary, palliative approach, serious illness conversations
Procedia PDF Downloads 102725 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EV), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can achieve 1 Mframe/s, and a high dynamic range (120 dB). However, the property that can contribute most to low energy consumption is its sparsity; to be more specific, this sensor only captures the pixels whose intensity changes. In other words, there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from an RGB image: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a timestamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To overcome the difficulties caused by these data format differences, most prior work builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even when the data can be fed in this way, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information alone is clearly not rich enough. Considering this context, we propose to use the timestamp information as the data representation fed to deep learning.
Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp within each frame; for example, a high value is given to a recent signal. We expect this data representation to capture the features of moving objects in particular, because the timestamp encodes the movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop an application for a surveillance system that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a mostly static scene. For comparison purposes, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark. Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
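The timestamp-based frame representation described in this abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation; the function name and the linear timestamp normalization are assumptions:

```python
def events_to_time_surface(events, width, height, t_start, t_end):
    """Build a frame where pixel intensity encodes event recency.

    events: iterable of (x, y, polarity, timestamp) tuples.
    Events near t_end map to values near 1.0, events near t_start
    to values near 0.0; pixels with no events stay at 0.0.
    """
    frame = [[0.0] * width for _ in range(height)]
    span = float(t_end - t_start)
    for x, y, polarity, t in events:
        if t_start <= t <= t_end:
            # Later timestamps produce higher intensities, so the frame
            # implicitly encodes motion direction and speed.
            frame[y][x] = (t - t_start) / span
    return frame
```

Feeding such frames to a CNN then replaces the polarity-only representation used by the benchmark.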
Procedia PDF Downloads 101724 Change of Substrate in Solid State Fermentation Can Produce Proteases and Phytases with Extremely Distinct Biochemical Characteristics and Promising Applications for Animal Nutrition
Authors: Paula K. Novelli, Margarida M. Barros, Luciana F. Flueri
Abstract:
Utilization of the agricultural by-products wheat bran and soybean bran as substrates for solid state fermentation (SSF) was studied, aiming at obtaining different enzymes from Aspergillus sp. with distinct biological characteristics and at their application in and improvement of animal nutrition. Aspergillus niger and Aspergillus oryzae were studied as they showed very high yields of phytase and protease production, respectively. Phytase activity was measured using p-nitrophenylphosphate as substrate and a standard curve of p-nitrophenol; one unit of enzymatic activity was defined as the quantity of enzyme necessary to release one μmol of p-nitrophenol. Protease activity was measured using azocasein as substrate. Phytase and protease activity increased substantially when the different biochemical characteristics were considered in the study. The optimum pH and pH stability of the phytase produced by A. niger with wheat bran as substrate were between 4.0 and 5.0, and the optimum temperature of activity was 37 °C. Phytase fermented in soybean bran showed constant optimum and stability values at all pHs studied, but low production. Phytase with both substrates showed stable activity at temperatures higher than 80 °C. Protease from A. niger showed very distinct optimum pH behavior, acid for wheat bran and basic for soybean bran, respectively, and optimal temperature and stability values at 50 °C. Phytase produced by A. oryzae in wheat bran had an optimum pH and temperature of 9 and 37 °C, respectively, but it was very unstable. On the other hand, the proteases were stable at high temperatures and at all pHs studied and showed a very high yield when fermented in wheat bran; however, when fermented in soybean bran, production was very low. Subsequently, the upscaled phytase from A. niger and proteases from A. oryzae were applied as enzyme additives in fish feed for digestibility studies.
Phytases and proteases were produced with stable enzyme activities of 7,000 U·g⁻¹ and 2,500 U·g⁻¹, respectively. When these enzymes were applied in a plant-protein-based fish diet for digestibility studies, they increased protein, mineral, energy and lipid availability, showing that these new enzymes can improve animal production and performance. In conclusion, the substrate, as well as the microorganism species, can affect the biochemical character of the enzyme produced. Moreover, the production of these enzymes by SSF can be up to 90% cheaper than that of commercial enzymes produced with the same fungal species by submerged fermentation. In addition, these low-cost enzymes can easily be applied as animal diet additives to improve production and performance. Keywords: agricultural by-products, animal nutrition, enzymes production, solid state fermentation
Procedia PDF Downloads 326723 Customized Temperature Sensors for Sustainable Home Appliances
Authors: Merve Yünlü, Nihat Kandemir, Aylin Ersoy
Abstract:
Temperature sensors are used in home appliances not only to monitor the basic functions of the machine but also to minimize energy consumption and ensure safe operation. In parallel with the development of smart home applications and IoT algorithms, these sensors produce important data, such as the frequency of use of the machine and user preferences, as well as critical data for diagnostic processes and fault detection throughout an appliance's operational lifespan. Commercially available thin-film resistive temperature sensors have a well-established manufacturing procedure that allows them to operate over a wide temperature range. However, these sensors are over-designed for white goods applications. Their operating temperature range is between -70 °C and 850 °C, while the temperature range required in home appliance applications is between 23 °C and 500 °C. To ensure the operation of commercial sensors over this wide temperature range, a platinum coating of approximately 1 micron thickness is usually applied to the wafer. However, the use of platinum as the coating material and the high coating thickness extend the sensor production process time and therefore increase sensor costs. In this study, an attempt was made to develop a low-cost temperature sensor design and production method that meets the technical requirements of white goods applications. For this purpose, a custom design was made, and the design parameters (length, width, trim points, and thin film deposition thickness) were optimized using statistical methods to achieve the desired resistivity value. To develop the thin film resistive temperature sensors, a single-side-polished sapphire wafer was used. To enhance adhesion and insulation, 100 nm of silicon dioxide was deposited by the inductively coupled plasma chemical vapor deposition technique. The lithography process was performed with a direct laser writer.
The lift-off process was performed after e-beam evaporation of 10 nm titanium and 280 nm platinum layers. Standard four-point-probe sheet resistance measurements were performed at room temperature. Annealing at 600 °C was carried out in a rapid thermal processing machine, and resistivity was measured with a probe station before and after annealing. The temperature dependence between 25 and 300 °C was also tested. As a result of this study, a temperature sensor has been developed that has a lower coating thickness than commercial sensors but can produce reliable data over the white goods application temperature range. A relatively simple but optimized production method has also been developed to produce this sensor. Keywords: thin film resistive sensor, temperature sensor, household appliance, sustainability, energy efficiency
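For background, the standard resistance-temperature relation for platinum thin-film sensors over the range discussed here is the Callendar-Van Dusen equation with the IEC 60751 coefficients. The sketch below illustrates that general relation only; it is not the calibration of the custom sensor developed in this study:

```python
# IEC 60751 Callendar-Van Dusen coefficients for platinum, valid for T >= 0 °C:
# R(T) = R0 * (1 + A*T + B*T^2)
A = 3.9083e-3   # 1/°C
B = -5.775e-7   # 1/°C^2

def pt_resistance(r0, t_celsius):
    """Resistance of a platinum RTD with 0 °C resistance r0 (ohms),
    for temperatures between 0 and 850 °C."""
    return r0 * (1.0 + A * t_celsius + B * t_celsius ** 2)
```

For a Pt100 element (r0 = 100 Ω), this gives the familiar 138.5 Ω at 100 °C; a custom white-goods sensor would be characterized over 23-500 °C instead of the full industrial range.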
Procedia PDF Downloads 73722 Extrudable Foamed Concrete: General Benefits in Prefabrication and Comparison in Terms of Fresh Properties and Compressive Strength with Classic Foamed Concrete
Authors: D. Falliano, G. Ricciardi, E. Gugliandolo
Abstract:
Foamed concrete belongs to the category of lightweight concrete. It is characterized by a density generally ranging from 200 to 2000 kg/m³ and typically comprises cement, water, preformed foam, fine sand and possibly fine particles such as fly ash or silica fume. The foam component mixed with the cement paste gives rise to a system of air voids in the cementitious matrix. The peculiar characteristics of foamed concrete elements can be summarized in the following aspects: 1) lightness, which allows reducing the dimensions of the resisting frame structure and is advantageous in refurbishment or seismic retrofitting in seismically vulnerable areas; 2) thermal insulating properties, especially at low densities; 3) good fire resistance compared to ordinary concrete; 4) improved workability; 5) cost-effectiveness due to the use of rather simple constituents that are easily available locally. Classic foamed concrete cannot be extruded, as it lacks dimensional stability in the green state, and this severely limits the possibility of industrializing it through a simple and cost-effective process characterized by flexibility and high production capacity. In fact, the viscosity enhancing agents (VEA) used to extrude traditional concrete cause the air bubbles in foamed concrete to collapse, so that it is impossible to extrude a lightweight product. These requirements have suggested the study of a particular additive that modifies the rheology of the fresh foamed concrete paste by increasing cohesion and viscosity and, at the same time, stabilizes the bubbles in the cementitious matrix, in order to allow dimensional stability in the green state and, consequently, the extrusion of a lightweight product. There are plans to submit the additive's formulation for patent.
In addition to the general benefits of the extrusion process, extrudable foamed concrete allows other limits to be overcome: the elimination of formworks and an expanded application spectrum, since extrusion is possible over a density range of 200 to 2000 kg/m³, which allows the prefabrication of both structural and non-structural constructive elements. This contribution also presents the significant differences between extrudable and classic foamed concrete fresh properties in terms of slump. Plastic air content, plastic density, hardened density and compressive strength have also been evaluated. The outcomes show that there are no substantial differences between extrudable and classic foamed concrete compressive strengths. Keywords: compressive strength, extrusion, foamed concrete, fresh properties, plastic air content, slump
Procedia PDF Downloads 176721 An Inquiry of the Impact of Flood Risk on Housing Market with Enhanced Geographically Weighted Regression
Authors: Lin-Han Chiang Hsieh, Hsiao-Yi Lin
Abstract:
This study aims to determine the impact of the disclosure of a flood potential map on housing prices. The disclosure is supposed to mitigate market failure by reducing information asymmetry. On the other hand, opponents argue that the official disclosure of simulated results only creates unnecessary disturbances in the housing market. This study identifies the impact of the disclosure of the flood potential map by comparing the hedonic price of flood potential before and after the disclosure. The flood potential map used in this study was published by the Taipei municipal government in 2015 and is the result of a comprehensive simulation based on geographical, hydrological, and meteorological factors. Residential property sales data from 2013 to 2016 are used, collected from the actual sales price registration system of the Department of Land Administration (DLA). The results show that the impact of flood potential on the residential real estate market is statistically significant both before and after the disclosure. However, the trend is clearer after the disclosure, suggesting that the disclosure does have an impact on the market. The results also show that the impact of flood potential differs by the severity and frequency of precipitation. The negative impact of a relatively mild, high-frequency flood potential is stronger than that of a heavy, low-probability flood potential. This indicates that home buyers are more concerned with the frequency than with the intensity of flooding. Another contribution of this study is methodological. The classic hedonic price analysis with OLS regression suffers from two spatial problems: the endogeneity problem caused by omitted spatially related variables, and the heterogeneity problem arising from the presumption that regression coefficients are spatially constant. These two problems are seldom considered in a single model.
This study deals with the endogeneity and heterogeneity problems together by combining the spatial fixed-effect model and geographically weighted regression (GWR). A series of studies has shown, by applying GWR, that the hedonic price of certain environmental assets varies spatially. Since the endogeneity problem is usually not considered in typical GWR models, it is arguable that omitted spatially related variables might bias the results of GWR models. By combining the spatial fixed-effect model and GWR, this study concludes that the effect of the flood potential map is highly sensitive to location, even after controlling for spatial autocorrelation at the same time. The main policy implication of this result is that it is improper to determine the potential benefit of a flood prevention policy by simply multiplying the hedonic price of flood risk by the number of houses. The effect of flood prevention might vary dramatically by location. Keywords: flood potential, hedonic price analysis, endogeneity, heterogeneity, geographically-weighted regression
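The GWR step on its own can be sketched as a locally weighted least-squares fit at each observation location. The snippet below is a minimal illustration of that idea, assuming a Gaussian distance kernel and a single predictor, and omitting the spatial fixed effects the study adds on top; all names and data are illustrative:

```python
import math

def gwr_coefficients(points, bandwidth):
    """Fit a local intercept and slope at each location.

    points: list of (u, v, x, y) tuples, where (u, v) are coordinates,
    x is the predictor (e.g. flood potential) and y the response
    (e.g. log price). Each location gets its own weighted least-squares
    fit, with Gaussian weights that decay with distance.
    Returns a list of (intercept, slope) pairs, one per location.
    """
    results = []
    for (ui, vi, _, _) in points:
        sw = swx = swy = swxx = swxy = 0.0
        for (uj, vj, xj, yj) in points:
            d2 = (ui - uj) ** 2 + (vi - vj) ** 2
            w = math.exp(-d2 / (2.0 * bandwidth ** 2))
            sw += w
            swx += w * xj
            swy += w * yj
            swxx += w * xj * xj
            swxy += w * xj * yj
        # Solve the 2x2 weighted normal equations in closed form.
        det = sw * swxx - swx * swx
        slope = (sw * swxy - swx * swy) / det
        intercept = (swy - slope * swx) / sw
        results.append((intercept, slope))
    return results
```

Because the coefficients are estimated per location, the marginal (hedonic) price of flood risk is allowed to vary over space, which is exactly the heterogeneity the study exploits.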
Procedia PDF Downloads 290720 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm was developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time needed to push back an aircraft and turn its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence in which aircraft take off is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) to assess how far away the target is. It is important to consider the remaining time needed to reach the target, so that delays caused by other aircraft can be part of the optimization. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft.
In this way, the quickest path is found for each aircraft while taking into account the movements of previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports. Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
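The core idea of guiding the expansion with a remaining-time estimate can be illustrated with plain A-star on a static graph. This sketch omits the time windows, pushback, stand holding and runway sequencing of the full QPPTW algorithm; the function and variable names are illustrative:

```python
import heapq

def a_star_time(graph, est_time, start, goal):
    """A-star where edge costs are travel times and the heuristic
    est_time[node] is an (admissible) estimate of the remaining time
    to the goal, instead of a geometric distance.

    graph: node -> list of (neighbor, travel_time) pairs.
    Returns (total_time, path) or None if the goal is unreachable.
    """
    open_set = [(est_time[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nbr, dt in graph.get(node, []):
            ng = g + dt
            if ng < best_g.get(nbr, float("inf")):
                best_g[nbr] = ng
                # f = elapsed time so far + estimated remaining time,
                # which steers expansion toward the target.
                heapq.heappush(open_set, (ng + est_time[nbr], ng, nbr, path + [nbr]))
    return None
```

In the airport setting the estimate would come from unimpeded taxi times, so that nodes likely to lie on quick routes are expanded first rather than expanding uniformly in all directions as Dijkstra's algorithm does.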
Procedia PDF Downloads 231719 The Application of Raman Spectroscopy in Olive Oil Analysis
Authors: Silvia Portarena, Chiara Anselmi, Chiara Baldacchini, Enrico Brugnoli
Abstract:
Extra virgin olive oil (EVOO) is a complex matrix mainly composed of fatty acids and other minor compounds, among which carotenoids are well known for their antioxidative function, a key mechanism of protection against cancer, cardiovascular diseases, and macular degeneration in humans. EVOO composition in terms of these constituents is generally the result of a complex combination of genetic, agronomic and environmental factors. To selectively improve the quality of EVOOs, the role of each factor in its biochemical composition needs to be investigated. By selecting fruits from four different cultivars grown and harvested similarly, it was demonstrated that Raman spectroscopy, combined with chemometric analysis, is able to discriminate the cultivars, also as a function of harvest date, based on the relative content and composition of fatty acids and carotenoids. In particular, a correct classification of up to 94.4% of samples according to cultivar and maturation stage was obtained. Moreover, using gas chromatography and high-performance liquid chromatography as reference techniques, the Raman spectral features further allowed building models, based on partial least squares regression, that were able to predict the relative amounts of the main fatty acids and the main carotenoids in EVOO with high coefficients of determination. Besides genetic factors, climatic parameters such as light exposure, distance from the sea, temperature, and amount of precipitation could have a strong influence on EVOO composition in terms of both major and minor compounds. This suggests that Raman spectra could act as a specific fingerprint for the geographical discrimination and authentication of EVOO. To understand the influence of the environment on EVOO Raman spectra, samples from seven regions along the Italian coasts were selected and analyzed.
In particular, a dual approach was used, combining Raman spectroscopy and isotope ratio mass spectrometry (IRMS) with principal component and linear discriminant analysis. A correct classification of 82% of EVOO samples based on their regional geographical origin was obtained. Raman spectra were acquired with a Super Labram spectrometer equipped with an argon laser (514.5 nm wavelength). Analyses of stable isotope content ratios were performed using an isotope ratio mass spectrometer connected to an elemental analyzer and to a pyrolysis system. These studies demonstrate that Raman spectroscopy is a valuable and useful technique for the analysis of EVOO. In combination with statistical analysis, it makes possible the assessment of specific samples' content and allows classifying oils according to their geographical and varietal origin. Keywords: authentication, chemometrics, olive oil, raman spectroscopy
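The supervised classification step of such a chemometric pipeline can be caricatured with a nearest-centroid rule: each class is represented by its mean spectrum, and a new sample is assigned to the closest class mean. This is a bare-bones stand-in for the PCA/LDA pipeline used in the study, not a reproduction of it; all names and numbers below are illustrative:

```python
def nearest_centroid_classify(train_spectra, train_labels, spectrum):
    """Assign a spectrum to the class whose mean (centroid) spectrum
    is closest in squared Euclidean distance.

    train_spectra: list of equal-length intensity lists.
    train_labels:  list of class labels (e.g. region names).
    """
    sums, counts = {}, {}
    for s, lab in zip(train_spectra, train_labels):
        if lab not in sums:
            sums[lab] = [0.0] * len(s)
            counts[lab] = 0
        counts[lab] += 1
        for i, v in enumerate(s):
            sums[lab][i] += v
    best_lab, best_d = None, float("inf")
    for lab, total in sums.items():
        # Squared distance to the class mean spectrum.
        d = sum((t / counts[lab] - v) ** 2 for t, v in zip(total, spectrum))
        if d < best_d:
            best_lab, best_d = lab, d
    return best_lab
```

In practice the spectra would first be baseline-corrected and projected onto a few principal components before the discriminant step, which is what makes the fingerprinting robust.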
Procedia PDF Downloads 332718 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems. One is that analytic solutions may not be possible. The other is that, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. The premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and can be inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of the time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single-KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials. Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
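The outer AST loop, stripped of learning, reduces to searching over disturbance sequences for one that drives a black-box simulator to failure, using the AST reward convention (zero reward on failure, otherwise a penalty equal to the miss distance). The sketch below uses plain random search in place of the KWIK reinforcement learners the abstract describes, purely to show the interface; all names are illustrative:

```python
import random

def adaptive_stress_test(simulate, horizon, n_iters=200, seed=0):
    """Search for a disturbance sequence that drives `simulate` to failure.

    simulate(seq) -> (failed, miss_distance): runs the black-box system
    under a sequence of per-step disturbances drawn from {-1, 0, +1}.
    Reward is 0.0 on failure and -miss_distance otherwise, so maximizing
    reward steers the search toward (near-)failures.
    Returns the best sequence found and its reward.
    """
    rng = random.Random(seed)
    best_seq, best_reward = None, float("-inf")
    for _ in range(n_iters):
        seq = [rng.choice([-1, 0, 1]) for _ in range(horizon)]
        failed, miss = simulate(seq)
        reward = 0.0 if failed else -miss
        if reward > best_reward:
            best_seq, best_reward = seq, reward
    return best_seq, best_reward
```

In the proposed framework, the random sampler is replaced by multi-fidelity KWIK learners that spend cheap low-fidelity rollouts broadly and reserve high-fidelity simulation for the sequences the learner is uncertain about.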
Procedia PDF Downloads 158