Search results for: municipal application
272 Traditional Practices of Conserving Biodiversity: A Case Study around Jim Corbett National Park, Uttarakhand, India
Authors: Rana Parween, Rob Marchant
Abstract:
With the continued loss of global biodiversity despite the application of modern conservation techniques, it has become crucial to investigate non-conventional methods. Accelerated destruction of ecosystems due to altered land use, climate change, and cultural and social change necessitates the exploration of society-biodiversity attitudes and links. While the loss of species and their extinction is a well-known and well-documented process that attracts much-needed attention from researchers, academics, governmental and non-governmental organizations, the loss of traditional ecological knowledge and practices is more insidious and goes unnoticed. The growing availability of 'indirect experiences' such as the internet and media is leading to a disaffection towards nature and the 'Extinction of Experience'. Exacerbated by the lack of documentation, there is the possibility that traditional practices and skills will become 'extinct' before they are fully recognized and captured. India, as a mega-biodiverse country, is also known for its historical conservation strategies entwined in traditional beliefs. Indigenous communities hold skillsets, knowledge, and traditions that have accumulated over multiple generations and may play an important role in conserving biodiversity today. This study explores the differences in knowledge of and attitudes towards conserving biodiversity among three different stakeholder groups living around Jim Corbett National Park, based on their age, traditions, and association with the protected area. A triangulation-designed, multi-strategy investigation collected qualitative and quantitative data through a questionnaire survey of village elders, the general public, and forest officers. The qualitative data were analyzed inductively using thematic content analysis. All coding and analysis were completed using NVivo 11. Although the village elders and some of the general public had vast amounts of traditional knowledge, most of it was related to animal husbandry and the medicinal value of plants. Village elders were unfamiliar with the term ‘biodiversity’, although their way of life and attitudes ensured that they cared for the ecosystem without the scientific basis underpinning biodiversity conservation. Inherently, village elders were keen to conserve nature; the superimposition of governmental policies without any tangible benefit or consultation was seen as detrimental. Alienating villagers, and consequently the village elders who are the reservoirs of traditional knowledge, would not only be damaging to the social network of the area but would also disregard years of tried and tested techniques held by the elders. Forest officers advocated for biodiversity and conservation education for women and children. Women, across all groups, when questioned about nature conservation, showed more interest in learning and participation. Biodiversity not only has an ethical and cultural value, but also plays a role in ecosystem function and, thus, provides ecosystem services and supports livelihoods. Therefore, underpinning and using traditional knowledge and incorporating it into programs of biodiversity conservation should be explored with a sense of urgency.
Keywords: biological diversity, mega-biodiverse countries, traditional ecological knowledge, society-biodiversity links
271 Positioning Mama Mkubwa Indigenous Model into Social Work Practice through Alternative Child Care in Tanzania: Ubuntu Perspective
Authors: Johnas Buhori, Meinrad Haule Lembuka
Abstract:
Introduction: Social work expands its boundaries to accommodate indigenous knowledge and practice for better competence and services. In Tanzania, Mama Mkubwa Mkubwa (MMM) (Mother’s elder sister) is an indigenous practice of alternative child care that represents other traditional practices across African societies known as Ubuntu practice. Ubuntu is African humanism with values and approaches that are connected to social work. MMM focuses on using the elder sister of a deceased mother or father, or a trusted elder woman from the extended family or indigenous community, to provide alternative care to an orphan or vulnerable child. From the Ubuntu perspective, it takes a whole village or community to raise a child, meaning that every person in the community is responsible for child care. Methodology: A desk review method guided by Ubuntu theory was applied to enrich the study. Findings: MMM resembles the Ubuntu ideal of traditional child protection of those in need as part of alternative child care throughout Tanzanian history. Social work practice, along with other formal alternative child care, was introduced in Tanzania during the colonial era in the 1940s. The socio-economic problems of the 1980s affected the country’s formal social welfare system, and the HIV/AIDS pandemic then heightened the vulnerability of children and hampered the capacity of the formal sector to provide social welfare services, including alternative child care. For decades, AIDS has contributed to an influx of orphans and vulnerable children, which facilitated the re-emergence of traditional alternative child care at the community level, including MMM. MMM is strongly practiced in regions where the AIDS pandemic affected the community, such as Njombe, the Coastal region, and Kagera. Despite existing challenges, MMM has remained a remarkable form of alternative child care practiced in both rural and urban communities, integrated with social welfare services. Tanzania envisions a traditional mechanism of family or community environment for alternative child care, with the notion that institutional care sometimes fails to offer children all they need to become productive members of society, and that it later becomes difficult for them to reconnect with society. Implications for Social Work: MMM is compatible with social work through its use of the strengths perspective; MMM reflects the Ubuntu perspective grounded in humane social work, using humane methods to achieve human goals. MMM further demonstrates the connectedness of those who care and those cared for and the inextricable link between them, as Ubuntu-inspired models of social work view children from family, community, environmental, and spiritual perspectives. Conclusion: Social work and MMM are compatible at the micro and mezzo levels; thus, MMM can be applied in social work practice beyond Tanzania when properly designed and integrated into other systems. When MMM is applied in social work, alternative care has the potential not only to support children but also to empower families and communities. Since MMM is community-owned and voluntary-based, it can relieve the government, social workers, and other formal sectors of the annual cost burden of providing institutionalized alternative child care.
Keywords: ubuntu, indigenous social work, african social work, ubuntu social work, child protection, child alternative care
270 Portable Environmental Parameter Monitor Based on STM32
Authors: Liang Zhao, Chongquan Zhong
Abstract:
Introduction: According to statistics, people spend 80% to 90% of their time indoors, so indoor air quality, whether at home or in the office, greatly impacts quality of life, health and work efficiency. Therefore, indoor air quality is very important to human activities. With the acceleration of urbanization, people are spending more time indoors; time spent in indoor environments, living space, and the frequency of interior decoration have all increased. However, housing decoration materials contain formaldehyde and other harmful substances, causing environmental and air quality problems, which have brought serious damage to countless families and attracted growing attention. According to World Health Organization statistics, the indoor environments in more than 30% of buildings in China are polluted by poisonous and harmful gases. Indoor pollution has caused various health problems, and these widespread public health problems can lead to respiratory diseases. Long-term inhalation of low-concentration formaldehyde can cause persistent headache, insomnia, weakness, palpitations, weight loss and vomiting, which seriously impact human health and safety. On the other hand, as for offices, some surveys show that good indoor air quality helps to motivate staff and improve work efficiency by 2%-16%. Therefore, people need to further understand their living and working environments. There is a need for easy-to-use indoor environment monitoring instruments with which users only have to power up the device to monitor the environmental parameters. The corresponding real-time data can be displayed on the screen for analysis. The monitor should have a sensitive alarm function and raise an alarm when harmful gases such as formaldehyde, CO or SO2 exceed levels safe for the human body. System design: According to the monitoring requirements for various gases, temperature and humidity, we designed a portable, light, real-time and accurate monitor for various environmental parameters, including temperature, humidity, formaldehyde, methane, and CO. This monitor generates an alarm signal when a monitored parameter exceeds the standard. It can conveniently measure a variety of harmful gases and provides an alarm function. It also has the advantages of small volume and convenience to carry and use. It has a real-time display function, outputting the parameters on the LCD screen, and a real-time alarm function. Conclusions: This study is focused on the research and development of a portable parameter monitoring instrument for indoor environments. On the platform of an STM32 development board, the monitored data are collected through external sensors. The STM32 platform handles the data acquisition and processing procedures and successfully monitors temperature, humidity, formaldehyde, CO, methane and other environmental parameters in real time. Real-time data are displayed on the LCD screen. The system is stable and can be used in different indoor places such as homes, hospitals, and offices. Meanwhile, the system adopts a modular design and is easy to port; with slight modification, the scheme can be reused in similar monitoring systems. This monitor has very high research and application value.
Keywords: indoor air quality, gas concentration detection, embedded system, sensor
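The alarm behaviour described above is essentially a threshold check on each monitored parameter. The following minimal, hardware-agnostic Python sketch illustrates that logic only; the sensor-reading function and all threshold values are hypothetical placeholders and are not taken from the paper or from the STM32 firmware.

```python
import random
import time

# Hypothetical alarm thresholds (acceptable low/high limits); not from the paper.
THRESHOLDS = {
    "temperature_C": (0.0, 35.0),
    "humidity_pct": (20.0, 70.0),
    "formaldehyde_mg_m3": (0.0, 0.10),
    "co_ppm": (0.0, 9.0),
    "methane_pct_lel": (0.0, 10.0),
}

def read_sensors():
    """Placeholder for the external sensor readout performed by the STM32 board."""
    return {
        "temperature_C": 22.0 + random.uniform(-2, 2),
        "humidity_pct": 45.0 + random.uniform(-5, 5),
        "formaldehyde_mg_m3": random.uniform(0.0, 0.15),
        "co_ppm": random.uniform(0.0, 12.0),
        "methane_pct_lel": random.uniform(0.0, 15.0),
    }

def check_alarms(sample):
    """Return the parameters that fall outside their acceptable range."""
    return [name for name, value in sample.items()
            if not (THRESHOLDS[name][0] <= value <= THRESHOLDS[name][1])]

if __name__ == "__main__":
    for _ in range(3):                       # a few monitoring cycles
        sample = read_sensors()
        alarms = check_alarms(sample)
        print(sample, "ALARM:" if alarms else "OK", alarms)
        time.sleep(1)
```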
269 Multi-scale Geographic Object-Based Image Analysis (GEOBIA) Approach to Segment Very High Resolution Images for Extraction of New Degraded Zones: Application to the Region of Mécheria in the South-West of Algeria
Authors: Bensaid A., Mostephaoui T., Nedjai R.
Abstract:
A considerable area of Algerian land is threatened by the phenomenon of wind erosion. For a long time, wind erosion and its associated harmful effects on the natural environment have posed a serious threat, especially in the arid regions of the country. In recent years, as a result of increases in the irrational exploitation of natural resources (fodder) and extensive land clearing, wind erosion has become particularly accentuated. The extent of degradation in the arid region of the Algerian Mécheria department has generated a new situation characterized by the reduction of vegetation cover, the decrease of land productivity, and sand encroachment on urban development zones. In this study, we attempt to investigate the potential of remote sensing and geographic information systems for detecting the spatial dynamics of the ancient dune cordons based on the numerical processing of PlanetScope PSB.SB sensor images acquired on September 29, 2021. As a second step, we explore the use of a multi-scale geographic object-based image analysis (GEOBIA) approach to segment the high spatial resolution images acquired over heterogeneous surfaces that vary according to human influence on the environment. We used the fractal net evolution approach (FNEA) algorithm to segment the images (Baatz & Schäpe, 2000). Multispectral data, a digital terrain model layer, ground truth data, a normalized difference vegetation index (NDVI) layer, and a first-order texture (entropy) layer were used to segment the multispectral images at three segmentation scales, with an emphasis on accurately delineating the boundaries and components of the sand accumulation areas (dunes, dune fields, nebkas, and barkhanes). It is important to note that each auxiliary data layer contributed to improving the segmentation at different scales. The silted areas were then classified over the Naâma area from the imagery using a nearest-neighbor approach. The classification of silted areas was successfully achieved over all study areas with an accuracy greater than 85%, although the results suggest that, overall, a higher degree of landscape heterogeneity may have a negative effect on segmentation and classification. Some areas suffered from the greatest over-segmentation and lowest mapping accuracy (Kappa: 0.79), which was partially attributed to the confounding of a greater proportion of mixed siltation classes from both sandy areas and bare-ground patches. This research has demonstrated a technique based on very high-resolution images for mapping sanded and degraded areas using GEOBIA, which can be applied to the study of other lands in the steppe areas of the northern countries of the African continent.
Keywords: land development, GIS, sand dunes, segmentation, remote sensing
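The multi-scale segmentation described above was run with the FNEA algorithm, which is typically available in commercial GEOBIA software. As a hedged illustration of the same multi-scale idea with open-source tools, the sketch below computes an NDVI layer and an entropy texture layer and then segments the NDVI at three scale parameters using scikit-image's Felzenszwalb graph-based segmentation, used here only as a stand-in for FNEA; the band arrays, scale values and texture window size are assumptions, not values from the study.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.segmentation import felzenszwalb
from skimage.util import img_as_ubyte

def ndvi(red, nir, eps=1e-6):
    """Normalized difference vegetation index from red and NIR reflectance bands."""
    return (nir - red) / (nir + red + eps)

def multiscale_segments(red, nir, scales=(50, 200, 800)):
    """Segment an NDVI layer at several scale parameters (fine to coarse objects).

    Felzenszwalb's graph-based method is an open-source analogue of the
    multiresolution (FNEA) segmentation used in the study, not the same algorithm.
    """
    veg = ndvi(red, nir)
    # first-order texture (entropy) layer, as used as an auxiliary input
    veg_u8 = img_as_ubyte((veg - veg.min()) / (np.ptp(veg) + 1e-6))
    texture = entropy(veg_u8, disk(5))
    segments = {s: felzenszwalb(veg, scale=s, sigma=0.8, min_size=50) for s in scales}
    return veg, texture, segments

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    red = rng.random((256, 256))    # stand-in reflectance bands for illustration
    nir = rng.random((256, 256))
    veg, texture, segs = multiscale_segments(red, nir)
    for s, labels in segs.items():
        print(f"scale={s}: {labels.max() + 1} segments")
```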
268 Adapting Inclusive Residential Models to Match Universal Accessibility and Fire Protection
Authors: Patricia Huedo, Maria José Ruá, Raquel Agost-Felip
Abstract:
Ensuring sustainable development of urban environments means guaranteeing adequate environmental conditions, being resilient, and meeting conditions of safety and inclusion for all people, regardless of their condition. All existing buildings should meet basic safety conditions and be equipped with safe and accessible routes, along with visual, acoustic and tactile signals to protect their users or potential visitors, regardless of whether they undergo rehabilitation or change-of-use processes. Moreover, from a social perspective, we consider the need to prioritize buildings occupied by the most vulnerable groups of people, which currently do not have specific regulations tailored to their needs. Some residential models in operation are not only outside the scope of application of the regulations in force; they also lack a design project or technical data that would make it possible to know the fire behavior of the construction materials. However, the difficulty and cost involved in adapting the entire building stock to current regulations can never justify a lack of safety for people. Hence, this work develops a simplified model to assess compliance with the basic safety conditions in case of fire and its compatibility with the specific accessibility needs of each user. The purpose is to support the designer in decision making, as well as to contribute to the development of a basic fire safety certification tool to be applied to inclusive residential models. This work has developed a methodology to support designers in adapting Social Services Centers, usually intended for vulnerable people. It incorporates a checklist of 9 items and information from sources or standards that designers can use to justify compliance or propose solutions. For each item, the verification system is justified, and possible sources of consultation are provided, considering the possibility that technical documentation of construction systems or building materials may be lacking. The procedure is based on diagnosing the degree of compliance with fire conditions of residential models used by vulnerable groups, considering the special accessibility conditions required by each user group. Based on visual inspection and site surveying, the verification model can serve as a support tool, significantly streamlining and simplifying the diagnostic phase and reducing the number of tests to be requested by over 75%. To illustrate the methodology, two different buildings in the Valencian Region (Spain) have been selected. One case study is a mental health facility for residential purposes, located in a rural area on the outskirts of a small town; the other is a day care facility for individuals with intellectual disabilities, located in a medium-sized city. The comparison between the case studies allows the model to be validated under distinct conditions. Verifying compliance with a basic safety level can allow a quality seal and a public register of buildings adapted to fire regulations to be established, similarly to what is being done with other types of attributes such as energy performance.
Keywords: fire safety, inclusive housing, universal accessibility, vulnerable people
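The core of the methodology above is a 9-item compliance checklist whose items can be marked compliant, non-compliant, or unverifiable when technical documentation is missing. The minimal Python sketch below shows one possible way to encode such a checklist for a screening report; the class names, example items and statuses are illustrative assumptions and do not reproduce the actual 9 items of the method.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChecklistItem:
    """One verification item with its compliance status and the consulted source."""
    description: str
    compliant: Optional[bool] = None   # None = could not be verified on site
    source: str = ""                   # regulation or reference used to justify it
    notes: str = ""

@dataclass
class FireSafetyScreening:
    building: str
    items: list = field(default_factory=list)

    def summary(self) -> str:
        verified = [i for i in self.items if i.compliant is not None]
        passed = [i for i in verified if i.compliant]
        pending = len(self.items) - len(verified)
        return (f"{self.building}: {len(passed)}/{len(verified)} verified items "
                f"compliant, {pending} pending documentation")

# Illustrative use with placeholder items (the real method defines 9 specific items).
screening = FireSafetyScreening("Day care facility, medium-sized city")
screening.items.append(ChecklistItem("Accessible evacuation route", True, "Local accessibility code"))
screening.items.append(ChecklistItem("Reaction-to-fire class of wall finishes", None, "", "No technical file available"))
print(screening.summary())
```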
267 A Rural Journey of Integrating Interprofessional Education to Foster Trust
Authors: Julia Wimmers Klick
Abstract:
Interprofessional Education (IPE) is widely recognized as a valuable approach in healthcare education, despite the challenges it presents. This study explores interprofessional (IP) surface anatomy lab sessions, with a focus on fostering trust and collaboration among healthcare students. The research is conducted within the context of rural healthcare settings in British Columbia (BC), where a medical school and a physical therapy (PT) program operate under the Faculty of Medicine at the University of British Columbia (UBC). While IPE sessions addressing soft skills have been implemented, the integration of hard skills, such as anatomy, remains limited. To address this gap, a pilot feasibility study was first conducted with a positive outcome; the follow-up study reported here then used these IPE sessions to explore the influence of bonding and trust between medical and PT students. Data were collected through focus groups comprising participating students and faculty members, and a structured SWOC (Strengths, Weaknesses, Opportunities, and Challenges) analysis was conducted. The IPE sessions, three in total, each consisted of a 2.5-hour lab on surface anatomy, in which PT students took on the teaching role and medical students were newly exposed to surface anatomy. The focus of the study was on the relationship-building process and trust development between the two student groups, rather than on assessing the acquisition of surface anatomy skills. Results indicated that the surface anatomy lab served as a suitable tool for the application and learning of soft skills. Faculty members observed positive outcomes, including productive interaction between students, a reversed hierarchy with PT students teaching medical students, the practice of active listening skills, and the use of a mutual language of anatomy. Notably, there was no grade assessment or external pressure to perform. The students also reported an overall positive experience; however, the specific impact on the development of soft skill competencies could not be definitively determined. Participants expressed a sense of feeling respected, welcomed, and included, all of which contributed to feeling safe. Within the small-group environment, students experienced becoming part of a community of healthcare providers that bonded over a shared interest in health professions education. They enjoyed sharing diverse experiences related to learning across their varied contexts, without the fear of judgment and reprisal that is often intimidating in single-profession contexts. During a joint Christmas party for both cohorts, faculty members observed students mingling, laughing, and forming bonds. This emphasized the importance of early bonding and trust development among healthcare colleagues, particularly in rural settings. In conclusion, the findings emphasize the potential of IPE sessions to enhance trust and collaboration among healthcare students, with implications for their future professional lives in rural settings. Early bonding and trust development are crucial in rural settings, where healthcare professionals often rely on each other. Future research should continue to explore the impact of content-concentrated IPE on the development of soft skill competencies.
Keywords: interprofessional education, rural healthcare settings, trust, surface anatomy
266 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance and demand indicators of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes and lithotripters) and physicians characterized the resource provision of medical institutions in the developed models. The data source for the research was an open database of the statistical service Eurostat. This source was chosen because its databases contain the complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study covers 28 European countries for the period from 2007 to 2016. For all countries included in the study with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt was made to improve the quality and interpretability of the models by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration in order to identify groups of similar countries and to construct separate regression models for them. Therefore, the original time series were used as the objects of clustering, and the k-medoids clustering algorithm was applied. The sampled objects themselves were used as the centers of the clusters obtained, since determining a centroid when working with time series involves additional difficulties. The number of clusters was selected using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values obtained from the developed models have a relatively low level of error and can be used to make decisions on the provision of the hospital with medical personnel. The research shows strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
Keywords: data analysis, demand modeling, healthcare, medical facilities
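As a hedged illustration of the clustering-then-modelling workflow described above (k-medoids on the raw time series, k chosen by the silhouette coefficient, then a separate model per cluster), the Python sketch below uses the KMedoids implementation from the scikit-learn-extra package; the synthetic panel data, the candidate range for k and the final linear regressor are assumptions made for illustration, not the study's actual data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # assumes scikit-learn-extra is installed

rng = np.random.default_rng(42)
# Illustrative panel: 28 countries x 10 yearly observations of one indicator.
X = rng.normal(size=(28, 10)).cumsum(axis=1)

# Choose the number of clusters with the silhouette coefficient.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 7):
    labels = KMedoids(n_clusters=k, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels
print(f"selected k={best_k} (silhouette={best_score:.2f})")

# Separate regression model per cluster (here: predict the last year from earlier ones).
for c in range(best_k):
    members = X[best_labels == c]
    if len(members) < 2:
        continue
    model = LinearRegression().fit(members[:, :-1], members[:, -1])
    # In-sample MAPE on synthetic data, purely for illustration.
    mape = np.mean(np.abs((members[:, -1] - model.predict(members[:, :-1]))
                          / members[:, -1])) * 100
    print(f"cluster {c}: {len(members)} countries, in-sample MAPE={mape:.2f}%")
```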
265 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters of varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filter sizes, even though additional computational cost is required for the convolution in the feed-forward procedure. The use of random kernels of varying sizes allows image features to be effectively analyzed at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with a smaller number of unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
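The layer described above (fixed random convolution kernels of several sizes, with only one trainable scalar weight per filter bank) can be sketched in PyTorch as follows. The channel counts, the kernel sizes and the initialization of the scalar weights are assumptions made for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    """Fixed random convolution kernels of several sizes; only a single scalar
    weight per filter bank is learned, following the random-projection idea."""

    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.kernels = nn.ParameterList()
        self.scales = nn.ParameterList()
        for k in kernel_sizes:
            w = torch.randn(out_ch, in_ch, k, k) / (k * in_ch ** 0.5)
            self.kernels.append(nn.Parameter(w, requires_grad=False))  # frozen random filters
            self.scales.append(nn.Parameter(torch.ones(1)))            # trainable scalar weight

    def forward(self, x):
        outs = []
        for w, s in zip(self.kernels, self.scales):
            pad = w.shape[-1] // 2                 # "same" padding for odd kernel sizes
            outs.append(s * F.conv2d(x, w, padding=pad))
        return torch.relu(sum(outs))

if __name__ == "__main__":
    layer = RandomKernelConv(in_ch=3, out_ch=16)
    x = torch.randn(2, 3, 32, 32)
    y = layer(x)
    trainable = [p.shape for p in layer.parameters() if p.requires_grad]
    print(y.shape, trainable)   # only the scalar weights require gradients
```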
264 Use of Sewage Sludge Ash as Partial Cement Replacement in the Production of Mortars
Authors: Domagoj Nakic, Drazen Vouk, Nina Stirmer, Mario Siljeg, Ana Baricevic
Abstract:
Wastewater treatment processes generate significant quantities of sewage sludge that need to be adequately treated and disposed of. In many EU countries, the problem of adequate disposal of sewage sludge has not been solved, nor is it governed by uniform rules, instructions or guidelines. Disposal of sewage sludge is important not only in terms of satisfying the regulations, but also in terms of choosing the optimal wastewater and sludge treatment technology. Among the solutions that seem reasonable, recycling of sewage sludge and its byproducts is the top recommendation. Within the framework of sustainable development, recycling of sludge almost completely closes the cycle of wastewater treatment, in which only negligible amounts of waste requiring landfilling are generated. In many EU countries, significant amounts of sewage sludge are incinerated, resulting in a new byproduct in the form of ash. Sewage sludge ash is three to five times smaller in volume than stabilized and dehydrated sludge, but it also requires further management. The combustion process also destroys hazardous organic components in the sludge and minimizes unpleasant odors. The basic objective of the presented research is to explore the possibilities of recycling sewage sludge ash as a supplementary cementitious material. The main oxides present in sewage sludge ash (SiO2, Al2O3 and CaO) are similar to those in cement, so the ash can be considered a latent hydraulic and pozzolanic material. The physical and chemical characteristics of ashes generated from sludge collected at different wastewater treatment plants and incinerated under laboratory conditions at different temperatures are investigated, since this is a prerequisite for subsequent recycling and eventual use in other industries. The research was carried out by replacing up to 20% of cement by mass in cement mortar mixes with the different ashes obtained and examining the characteristics of the resulting mixes in the fresh and hardened condition. The mixtures with the highest ash content (20%) showed an average drop in workability of about 15%, which is attributed to the increased water requirement when ash was used. Although some mixes containing added ash showed compressive and flexural strengths equivalent to those of the reference mixes, a slight decrease in strength was generally observed. However, it is important to point out that the compressive strengths always remained above 85% of the reference mix, while flexural strengths remained above 75%. The ecological impact of innovative construction products containing sewage sludge ash was determined by analyzing leaching concentrations of heavy metals. The results demonstrate that sewage sludge ash can satisfy technical and environmental criteria for use in cementitious materials, which represents a new recycling application for an increasingly important waste material that is normally landfilled. Particular emphasis is placed on linking the composition of the generated ashes, depending on their origin and the applied treatment processes (stage of wastewater treatment, sludge treatment technology, incineration temperature), with the characteristics of the final products. Acknowledgement: This work has been fully supported by the Croatian Science Foundation under the project '7927 - Reuse of sewage sludge in concrete industry – from infrastructure to innovative construction products'.
Keywords: cement mortar, recycling, sewage sludge ash, sludge disposal
263 Recurrent Neural Networks for Classifying Outliers in Electronic Health Record Clinical Text
Authors: Duncan Wallace, M-Tahar Kechadi
Abstract:
In recent years, Machine Learning (ML) approaches have been successfully applied to the analysis of patient symptom data in the context of disease diagnosis, at least where such data is well codified. However, much of the data present in Electronic Health Records (EHR) is unlikely to prove suitable for classic ML approaches. Furthermore, as scores of data are widely spread across both hospitals and individuals, a decentralized, computationally scalable methodology is a priority. The focus of this paper is to develop a method to predict outliers in an out-of-hours healthcare provision center (OOHC). In particular, our research is based upon the early identification of patients who have underlying conditions which will cause them to repeatedly require medical attention. An OOHC provides ad-hoc delivery of triage and treatment, where interactions occur without recourse to a full medical history of the patient in question. Medical histories relating to patients contacting an OOHC may reside in several distinct EHR systems in multiple hospitals or surgeries, which are unavailable to the OOHC in question. As such, although a local solution is optimal for this problem, it follows that the data under investigation is incomplete, heterogeneous, and composed mostly of noisy textual notes compiled during routine OOHC activities. Through the use of Deep Learning methodologies, the aim of this paper is to provide the means to identify patient cases, upon initial contact, which are likely to relate to such outliers. To this end, we compare the performance of Long Short-Term Memory, Gated Recurrent Units, and combinations of both with Convolutional Neural Networks. A further aim of this paper is to elucidate the discovery of such outliers by examining the exact terms which provide a strong indication of positive and negative case entries. While free text is the principal data extracted from EHRs for classification, EHRs also contain normalized features. Although the specific demographic features treated within our corpus are relatively limited in scope, we examine whether it is beneficial to include such features among the inputs to our neural network, or whether these features are more successfully exploited in conjunction with a different form of classifier. To this end, we compare the performance of randomly generated regression trees and support vector machines and determine the extent to which our classification program can be improved upon by using either of these machine learning approaches in conjunction with the output of our Recurrent Neural Network application. The output of our neural network is also used to help determine the most significant lexemes present within the corpus for identifying high-risk patients. By combining the confidence of our classification program in relation to lexemes within true positive and true negative cases with the inverse document frequency of the lexemes related to these cases, we can determine which features act as the primary indicators of frequent-attender and non-frequent-attender cases, providing a human-interpretable appreciation of how our program classifies cases.
Keywords: artificial neural networks, data-mining, machine learning, medical informatics
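As a minimal sketch of the kind of recurrent classifier compared above, the PyTorch model below embeds a tokenized free-text note, runs it through an LSTM, and concatenates a small vector of normalized demographic features to the final hidden state before classifying the case as frequent-attender or non-frequent-attender. The vocabulary size, dimensions and number of demographic features are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class NoteClassifier(nn.Module):
    """LSTM over tokenized clinical notes, with optional demographic features
    concatenated to the final hidden state before classification."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64, n_demog=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + n_demog, 2)  # frequent vs non-frequent attender

    def forward(self, tokens, demog):
        emb = self.embed(tokens)                 # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.rnn(emb)              # h_n: (1, batch, hidden_dim)
        features = torch.cat([h_n[-1], demog], dim=1)
        return self.head(features)               # raw logits for the 2 classes

if __name__ == "__main__":
    model = NoteClassifier()
    tokens = torch.randint(1, 20000, (8, 120))   # 8 padded notes of 120 token ids
    demog = torch.randn(8, 4)                    # 8 normalized demographic vectors
    logits = model(tokens, demog)
    print(logits.shape)                          # torch.Size([8, 2])
```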
262 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles
Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska
Abstract:
In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, the disinfection of water, air and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, e.g., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of each IL to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as by the formation of hydroxyl radicals based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs gave a clear observation of the samples, in which the particles were composed of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in a rise in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results for the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with the more efficient degradation of phenol. NCHS analysis showed that the ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.
Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2
261 Nanoparticle Supported, Magnetically Separable Metalloporphyrin as an Efficient Retrievable Heterogeneous Nanocatalyst in Oxidation Reactions
Authors: Anahita Mortazavi Manesh, Mojtaba Bagherzadeh
Abstract:
Metalloporphyrins are well known to mimic the activity of monooxygenase enzymes. In this regard, metalloporphyrin complexes have been largely employed as valuable biomimetic catalysts, owing to the critical roles they play in oxygen transfer processes in catalytic oxidation reactions. Research in this area is based on different strategies to design selective, stable and high-turnover catalytic systems. Immobilization of expensive metalloporphyrin catalysts onto supports appears to be a good way to improve their stability, selectivity and catalytic performance because of the support environment and other advantages with respect to recovery and reuse. In other words, supporting metalloporphyrins provides a physical separation of active sites, thus minimizing catalyst self-destruction and dimerization of unhindered metalloporphyrins. Furthermore, heterogeneous catalytic oxidations have become an important target since these processes are used in industry, helping to minimize the problems of industrial waste treatment. Hence, the immobilization of these biomimetic catalysts is much desired. An attractive approach to the preparation of heterogeneous catalysts involves the immobilization of complexes on silica-coated magnetic nanoparticles. Fe3O4@SiO2 magnetic nanoparticles have been studied extensively due to their superparamagnetism, large surface-area-to-volume ratio and easy functionalization. Using heterogenized homogeneous catalysts is an attractive option owing to facile catalyst separation, simplified product work-up and continuity of the catalytic system. Homogeneous catalysts immobilized on the surface of magnetic nanoparticles (MNPs) occupy a unique position because they combine the advantages of both homogeneous and heterogeneous catalysts. In addition, the superparamagnetic nature of MNPs enables very simple separation of the immobilized catalysts from the reaction mixture using an external magnet. In the present work, an efficient heterogeneous catalyst was prepared by immobilizing a manganese porphyrin on functionalized magnetic nanoparticles through an aminopropyl linkage. The prepared catalyst was characterized by elemental analysis, FT-IR spectroscopy, X-ray powder diffraction, atomic absorption spectroscopy, UV-Vis spectroscopy, and scanning electron microscopy. The application of the immobilized metalloporphyrin in the oxidation of various organic substrates was explored using gas chromatographic (GC) analyses. The results showed that the supported Mn-porphyrin catalyst (Fe3O4@SiO2-NH2@MnPor) is an efficient and reusable catalyst in oxidation reactions. Our catalytic system exhibits high catalytic activity in terms of turnover number (TON) and reaction conditions. Leaching and recycling experiments revealed that the nanocatalyst can be recovered several times without loss of activity or magnetic properties. The most important advantage of this heterogenized catalytic system is the simplicity of catalyst separation: the catalyst can be separated from the reaction mixture by applying a magnet. Furthermore, the separation and reuse of the magnetic Fe3O4 nanoparticles were very effective and economical.
Keywords: Fe3O4 nanoparticle, immobilized metalloporphyrin, magnetically separable nanocatalyst, oxidation reactions
260 Production of Medicinal Bio-active Amino Acid Gamma-Aminobutyric Acid In Dairy Sludge Medium
Authors: Farideh Tabatabaee Yazdi, Fereshteh Falah, Alireza Vasiee
Abstract:
Introduction: Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is widely present in organisms. GABA is a pharmacologically and biologically active component, and its applications are wide and useful. Several important physiological functions of GABA have been characterized, such as neurotransmission and the induction of hypotension. GABA is also a strong secretagogue of insulin from the pancreas, effectively inhibits small airway-derived lung adenocarcinoma, and acts as a tranquilizer. Many microorganisms can produce GABA, and lactic acid bacteria have been a focus of research in recent years because lactic acid bacteria possess special physiological activities and are generally regarded as safe. Among them, Lb. brevis produces the highest amount of GABA. The major factors affecting GABA production have been characterized, including carbon sources and glutamate concentration. The use of food industry waste to produce valuable products such as amino acids seems to be a good way to reduce production costs and prevent the waste of food resources. In a dairy factory, a high volume of sludge is produced from the separator; it contains useful compounds such as growth factors, carbon, nitrogen, and organic matter that can be used by microorganisms such as Lb. brevis as carbon and nitrogen sources. Therefore, it is a good substrate for GABA production. GABA is primarily formed by the irreversible α-decarboxylation reaction of L-glutamic acid or its salts, catalysed by the GAD enzyme. In the present study, this aim was achieved by growing Lb. brevis rapidly and producing GABA using dairy industry sludge as a suitable growth medium. Lactobacillus brevis strains obtained from the Microbial Type Culture Collection (MTCC) were used as model strains. In order to prepare dairy sludge as a medium, it was sterilized at 121 °C for 15 minutes. Lb. brevis was inoculated into the sludge medium at pH 6 and incubated for 120 hours at 30 °C. After fermentation, the supernatant was centrifuged, and the GABA produced was analyzed qualitatively by the thin layer chromatography (TLC) method and quantitatively by the high-performance liquid chromatography (HPLC) method. As the percentage of dairy sludge in the culture medium increased, the amount of GABA increased. Evaluation of bacterial growth in this medium also showed the positive effect of dairy sludge on the growth of Lb. brevis, which resulted in the production of more GABA. GABA-producing LAB offer the opportunity to develop naturally fermented, health-oriented products. Although some GABA-producing LAB have been isolated to find strains suitable for different fermentations, further screening of various GABA-producing LAB strains, especially high-yielding strains, is necessary. The production of gamma-aminobutyric acid by lactic acid bacteria is safe and eco-friendly. The use of dairy industry waste enhances environmental safety and also provides the possibility of producing valuable compounds such as GABA. In general, dairy sludge is a suitable medium for the growth of lactic acid bacteria and the production of this amino acid, and it can reduce the final cost by providing carbon and nitrogen sources.
Keywords: GABA, Lactobacillus, HPLC, dairy sludge
259 Dynamic Thermomechanical Behavior of Adhesively Bonded Composite Joints
Authors: Sonia Sassi, Mostapha Tarfaoui, Hamza Benyahia
Abstract:
Composite materials are increasingly being used as a substitute for metallic materials in many technological applications such as aeronautics, aerospace, marine and civil engineering. For composite materials, the thermomechanical response evolves with the strain rate. The energy balance equation for anisotropic, elastic materials includes heat source terms that govern the conversion of some of the kinetic work into heat. The remainder contributes to the stored energy creating the damage process in the composite material. In this paper, we investigate the bulk thermomechanical behavior of adhesively bonded composite assemblies to quantitatively assess the temperature rise which accompanies adiabatic deformations. In particular, adhesively bonded joints in a glass/vinylester composite material are subjected to in-plane dynamic loads over a range of strain rates. The dynamic thermomechanical behavior of this material is investigated using compression Split Hopkinson Pressure Bars (SHPB) coupled with a high-speed infrared camera and a high-speed camera to measure in real time the dynamic behavior, the damage kinetics and the temperature variation in the material. The interest of using a high-speed IR camera is to view in real time the evolution of heat dissipation in the material when damage occurs. However, this technique does not produce thermal values in correlation with the stress-strain curves of the composite material because of its long response time in comparison with the dynamic test duration. For this reason, the authors revisit the application of small thermocouples placed on the surface of the material to ensure real thermal measurements under dynamic loading. Experiments with dynamically loaded material show that the thermocouples record temperature values with a short typical rise time as a result of the conversion of kinetic work into heat during the compression test. These results show that small thermocouples can be used to provide an important complement to non-contact techniques such as the high-speed infrared camera. A significant temperature rise was observed in the in-plane compression tests, especially at high strain rates. During the tests, it was noticed that a sudden temperature rise occurs when macroscopic damage occurs. This rise in temperature is linked to the rate of damage: the more severe the damage, the higher the localized temperature detected. This shows the strong relationship between the occurrence of damage and the induced heat dissipation. In the case of the in-plane tests, the damage takes place more abruptly as the strain rate is increased. The difference observed in the thermomechanical response in in-plane compression is explained only by the difference in the damage process active during the compression tests. In this study, we highlighted the dependence of the thermomechanical response of bonded specimens on the strain rate. The effect of heat dissipation in this material therefore cannot be ignored and should be taken into account when defining damage models for impact loading.
Keywords: adhesively-bonded composite joints, damage, dynamic compression tests, energy balance, heat dissipation, SHPB, thermomechanical behavior
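A common back-of-the-envelope check on such measurements is the adiabatic temperature rise estimated from the mechanical work, ΔT = β ∫σ dε / (ρ c_p), where β is the fraction of the work converted to heat. The Python sketch below integrates a stress-strain curve numerically to obtain this estimate; the density, specific heat, β value and the synthetic stress-strain curve are illustrative assumptions and are not values from the study.

```python
import numpy as np

def adiabatic_temp_rise(strain, stress_pa, rho=1900.0, cp=1000.0, beta=0.9):
    """Estimate the adiabatic temperature rise (K) from a stress-strain curve.

    beta : assumed fraction of mechanical work dissipated as heat
    rho  : density in kg/m^3, cp : specific heat in J/(kg K) -- illustrative values
    """
    work_density = np.trapz(stress_pa, strain)      # J/m^3, area under the curve
    return beta * work_density / (rho * cp)

if __name__ == "__main__":
    strain = np.linspace(0.0, 0.03, 200)             # up to 3% compressive strain
    stress = 15e9 * strain * np.exp(-strain / 0.02)  # hypothetical softening response, Pa
    print(f"estimated temperature rise: {adiabatic_temp_rise(strain, stress):.1f} K")
```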
258 Collagen/Hydroxyapatite Compositions Doped with Transitional Metals for Bone Tissue Engineering Applications
Authors: D. Ficai, A. Ficai, D. Gudovan, I. A. Gudovan, I. Ardelean, R. Trusca, E. Andronescu, V. Mitran, A. Cimpean
Abstract:
In the last years, scientists have struggled hard to mimic bone structures in order to develop implants and biostructures which present higher biocompatibility and a reduced rejection rate. One way to reach this goal is to use materials similar to those of bone, namely collagen/hydroxyapatite composite materials. In this study, new collagen/hydroxyapatite composite materials doped with Cu, Li, Mn and Zn were successfully prepared. The synthesis method is as follows: weigh the Ca(OH)₂ (7.3067 g) together with ZnCl₂ (0.134 g), CuSO₄ (0.159 g), Li₂CO₃ (0.133 g) and MnCl₂·4H₂O (0.1971 g), and suspend them in 100 ml distilled water under magnetic stirring. To the suspension thus obtained, a solution of NaH₂PO₄·H₂O (8.247 g dissolved in 50 ml distilled water) is added dropwise at 1 ml/min, followed by adjusting the pH to 9.5 with HCl and, finally, filtering and washing until neutral pH. The as-obtained slurry was dried in an oven at 80°C and then calcined at 600°C in order to ensure proper purification of the final product from organic phases, also inducing proper sterilization of the mixture before insertion into the collagen matrix. The collagen/hydroxyapatite composite materials are tailored from a morphological point of view to balance their biocompatibility and bio-integration against mechanical properties, whereas the addition of the dopants is aimed at improving the biological activity of the samples. The addition of transitional metals can improve the biocompatibility, and especially osteoblast adhesion (Mn²⁺), or induce slightly better osteoblast differentiation, Zn²⁺ being a cofactor for many enzymes including those responsible for cell differentiation. If the amount is too high, the final material can become toxic and lose all of its biocompatibility. In order to achieve good biocompatibility and avoid cytotoxic effects, the amount of transitional metals added has to be maintained at low levels (0.5% molar). The amount of transitional metals entering the unit cell of HA will be verified using an inductively coupled plasma mass spectrometric system. This highly sensitive technique is necessary because, at such low levels of transitional metals, the difference between biocompatible and cytotoxic is a very thin line, thus requiring proper and thorough investigation using a precise technique. In order to determine the structure and morphology of the obtained composite materials, IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectrometry (EDS) were used. Acknowledgment: The present work was possible due to the EU-funding grant POSCCE-A2O2.2.1-2013-1, Project No. 638/12.03.2014, code SMIS-CSNR 48652. The financial contribution received from the national project “Biomimetic porous structures obtained by 3D printing developed for bone tissue engineering” (BIOGRAFTPRINT), No. 127PED/2017, is also highly acknowledged.
Keywords: collagen, composite materials, hydroxyapatite, bone tissue engineering
257 Thulium Laser Design and Experimental Verification for NIR and MIR Nonlinear Applications in Specialty Optical Fibers
Authors: Matej Komanec, Tomas Nemecek, Dmytro Suslov, Petr Chvojka, Stanislav Zvanovec
Abstract:
Nonlinear phenomena in the near- and mid-infrared regions are attracting scientific attention mainly due to the possibilities of supercontinuum generation and its subsequent utilization in ultra-wideband applications such as absorption spectroscopy or optical coherence tomography. Thulium-based fiber lasers provide access to high-power ultrashort pump pulses in the vicinity of 2000 nm, which can be easily exploited for various nonlinear applications. The paper presents a simulation and experimental study of a pulsed thulium laser intended for near-infrared (NIR) and mid-infrared (MIR) nonlinear applications in specialty optical fibers. In the first part of the paper, the thulium laser is discussed. The laser is based on a gain-switched seed laser and a series of amplification stages for obtaining output peak powers on the order of kilowatts for pulses shorter than 200 ps at full width at half maximum. The pulsed thulium laser is first studied in simulation software, focusing on the seed-laser properties. Afterward, a thulium-based pre-amplification stage is discussed, with a focus on low-noise signal amplification, high signal gain, and the elimination of pulse distortions during pulse propagation in the gain medium. Following the pre-amplification stage, a second gain stage is evaluated, incorporating a shorter thulium fiber with an increased rare-earth dopant ratio. Last, a power-booster stage is analyzed, where peak powers on the order of kilowatts should be achieved. The results of the analytical study are further validated by the experimental campaign. The simulation model is further corrected based on real components: parameters such as real insertion losses, crosstalk, polarization dependencies, etc. are included. The second part of the paper evaluates the utilization of nonlinear phenomena and their specific features in the vicinity of 2000 nm, compared to, e.g., 1550 nm, and presents supercontinuum modelling based on the pulsed output of the thulium laser. The supercontinuum generation simulation provides reasonably accurate results once the fiber dispersion profile is precisely defined and the fiber nonlinearity is known; furthermore, the input pulse shape and peak power must be known, which is assured thanks to the experimental measurement of the studied pulsed thulium laser. The supercontinuum simulation model is put in relation to the designed and characterized specialty optical fibers, which are discussed in the third part of the paper. The focus is placed on silica and mainly on non-silica fibers (fluoride, chalcogenide, lead-silicate) in their conventional, microstructured or tapered variants. Parameters such as the dispersion profile and nonlinearity of the exploited fibers were characterized either with an accurate model developed in COMSOL software or by direct experimental measurement to achieve even higher precision. The paper then combines all three studied topics and presents a possible application of such a pulsed thulium laser system working with specialty optical fibers.
Keywords: nonlinear phenomena, specialty optical fibers, supercontinuum generation, thulium laser
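Supercontinuum modelling of the kind mentioned above is commonly based on a (generalized) nonlinear Schrödinger equation solved with the split-step Fourier method. The Python sketch below shows a deliberately simplified scalar version with only second-order dispersion and the Kerr term; the pulse and fiber parameters are placeholders, and the higher-order dispersion, Raman and self-steepening terms needed for realistic 2 µm supercontinuum simulations are omitted.

```python
import numpy as np

def split_step_nlse(A0, dt, beta2, gamma, length, steps):
    """Scalar NLSE propagation: i dA/dz = (beta2/2) d^2A/dt^2 - gamma |A|^2 A.

    A0    : complex field envelope samples (sqrt(W))
    dt    : time-grid spacing (s); beta2 in s^2/m, gamma in 1/(W m), length in m
    """
    n = A0.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)
    dz = length / steps
    disp = np.exp(0.5j * beta2 * omega**2 * dz)         # linear (dispersion) step operator
    A = A0.astype(complex)
    for _ in range(steps):
        A = np.fft.ifft(disp * np.fft.fft(A))           # dispersion step in frequency domain
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr nonlinearity step in time domain
    return A

if __name__ == "__main__":
    dt = 5e-15
    t = (np.arange(2**12) - 2**11) * dt
    pulse = np.sqrt(1e3) * np.exp(-(t / 100e-15)**2)    # ~1 kW peak, ~100 fs placeholder pulse
    out = split_step_nlse(pulse, dt, beta2=-20e-27, gamma=0.01, length=0.5, steps=2000)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(out)))**2
    print("spectral bins above 1% of peak:", np.count_nonzero(spectrum > spectrum.max() / 100))
```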
256 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students
Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara
Abstract:
BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best method to reduce melanoma and non-melanoma skin cancer-related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, it requires training for effective use. The introduction of small and handy dermoscopes has contributed significantly to making dermatoscopy a useful first-level tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether the addition of dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in 2 cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy through their teacher only (1 dermatoscope for 15 people), or cohort B, with full access to dermatoscopy during their clinical classes (1 dermatoscope for 4 people, available constantly, plus a 15-minute dermoscopy tutorial). Students in both study arms took an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions and completed a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who had the possibility to use a dermatoscope themselves also had higher satisfaction rates after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope in dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve the early recognition of malignant lesions and thus decrease melanoma mortality.
Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer
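The abstract reports that cohort B scored higher on the 10-lesion image test (P < 0.05) but does not state which statistical test was applied. Purely for illustration, the sketch below compares two cohorts' test scores with a Mann-Whitney U test, one common choice for small ordinal scores; the score values and group sizes are invented.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical scores out of 10 on the image-based lesion test.
cohort_a = rng.integers(4, 8, size=30)   # limited dermatoscope access
cohort_b = rng.integers(6, 10, size=30)  # full access plus 15-minute tutorial

stat, p_value = mannwhitneyu(cohort_b, cohort_a, alternative="greater")
print(f"median A={np.median(cohort_a)}, median B={np.median(cohort_b)}, "
      f"U={stat:.1f}, one-sided p={p_value:.4f}")
```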
Procedia PDF Downloads 114
255 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas have recently attracted widespread interest among researchers. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. An understanding of the thermophysical properties of complex liquids under various conditions is of practical interest in the field of science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. The lack of information on the thermal conductivity of dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of the transport properties of complex liquids is a fundamental research task in the field of thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is reviewed with special emphasis on its application to transport problems of complex liquids. This proposed work is particularly motivated by the fact that, for the first time, the heat conduction problem is reformulated into an algorithm based on polynomial velocity and temperature profiles for the investigation of transport properties and their nonlinear behaviors in NICDPLs. The aim of the proposed work is to implement a NEMDS algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behavior in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble ≡ NVT). The output steps will be developed between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity is dependent on the plasma parameters and that the position of the minimum, λmin, shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier known simulation data, generally differing from the earlier plasma λ0 values by 2%-20%, depending on Γ and κ. It has been shown that the obtained results at a normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results with fast convergence and small size effects over a wide range of plasma states.Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
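As a rough illustration of the Yukawa (screened Coulomb) interaction and the Γ and κ parameters that characterize such simulations, the sketch below evaluates them in Python; the grain charge, temperature, Wigner-Seitz radius, and Debye length used here are hypothetical placeholders, and the snippet is not the HNEMD code used in the study.

```python
import numpy as np

# Physical constants (SI units)
E0 = 8.8541878128e-12   # vacuum permittivity, F/m
KB = 1.380649e-23       # Boltzmann constant, J/K

def yukawa_parameters(Q, a, T, lambda_D):
    """Return the coupling parameter Gamma and screening parameter kappa
    for a Yukawa (dusty-plasma) system.
    Q: grain charge (C), a: Wigner-Seitz radius (m),
    T: kinetic temperature (K), lambda_D: Debye length (m)."""
    gamma = Q**2 / (4.0 * np.pi * E0 * a * KB * T)  # Coulomb coupling strength
    kappa = a / lambda_D                            # screening strength
    return gamma, kappa

def yukawa_potential(r, Q, lambda_D):
    """Screened Coulomb (Yukawa) pair potential phi(r) in joules."""
    return Q**2 / (4.0 * np.pi * E0 * r) * np.exp(-r / lambda_D)

# Hypothetical example values, for illustration only
Q = 1.0e4 * 1.602e-19   # grain charge of ~10^4 elementary charges
a = 3.0e-4              # Wigner-Seitz radius, m
T = 300.0               # kinetic temperature, K
lambda_D = 4.0e-4       # Debye screening length, m

gamma, kappa = yukawa_parameters(Q, a, T, lambda_D)
print(f"Gamma = {gamma:.1f}, kappa = {kappa:.2f}")
print(f"phi(a) = {yukawa_potential(a, Q, lambda_D):.3e} J")
```

In an actual HNEMD run, these reduced parameters, rather than the raw SI quantities, are what the reported λ data are tabulated against.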
Procedia PDF Downloads 274
254 A Long-Standing Methodology Quest Regarding Commentary of the Qur’an: Modern Debates on Function of Hermeneutics in the Quran Scholarship in Turkey
Authors: Merve Palanci
Abstract:
This paper aims to reveal and analyze methodology debates on Qur’an Commentary in Turkish Scholarship and to make sound inductions on the current situation, with reference to the literature evolving around the credibility of Hermeneutics when the case is Qur’an commentary and methodological connotations related to it, together with the other modern approaches to the Qur’an. It is fair to say that Tafseer, constituting one of the main parts of basic Islamic sciences, has drawn great attention from both Muslim and non-Muslim scholars for a long time. And with the emplacement of an acute junction between natural sciences and social sciences in the post-enlightenment period, this interest seems to pave the way for methodology discussions that are conducted by theology spheres, occupying a noticeable slot in Tafseer literature, as well. A panoramic glance at the classical treatise in relation to the methodology of Tafseer, namely Usul al-Tafseer, leads the reader to the conclusion that these classics are intrinsically aimed at introducing the Qur’an and its early history of formation as a corpus and providing a better understanding of its content. To illustrate, the earliest methodology work extant for Qur’an commentary, al- Aql wa’l Fahm al- Qur’an by Harith al-Muhasibi covers content that deals with Qur’an’s rhetoric, its muhkam and mutashabih, and abrogation, etc. And most of the themes in question are evident to share a common ground: understanding the Scripture and producing an accurate commentary to be built on this preliminary phenomenon of understanding. The content of other renowned works in an overtone of Tafseer methodology, such as Funun al Afnan, al- Iqsir fi Ilm al- Tafseer, and other succeeding ones al- Itqan and al- Burhan is also rich in hints related to preliminary phenomena of understanding. However, these works are not eligible for being classified as full-fledged methodology manuals assuring a true understanding of the Qur’an. And Hermeneutics is believed to supply substantial data applicable to Qur’an commentary as it deals with the nature of understanding itself. Referring to the latest tendencies in Tafseer methodology, this paper envisages to centralize hermeneutical debates in modern scholarship of Qur’an commentary and the incentives that lead scholars to apply for Hermeneutics in Tafseer literature. Inspired from these incentives, the study involves three parts. In the introduction part, this paper introduces key features of classical methodology works in general terms and traces back the main methodological shifts of modern times in Qur’an commentary. To this end, revisionist Ecole, scientific Qur’an commentary ventures, and thematic Qur’an commentary are included and analysed briefly. However, historical-critical commentary on the Quran, as it bears a close relationship with hermeneutics, is handled predominantly. The second part is based on the hermeneutical nature of understanding the Scripture, revealing a timeline for the beginning of hermeneutics debates in Tafseer, and Fazlur Rahman’s(d.1988) influence will be manifested for establishing a theoretical bridge. In the following part, reactions against the application of Hermeneutics in Tafseer activity and pro-hermeneutics works will be revealed through cross-references to the prominent figures of both, and the literature in question in theology scholarship in Turkey will be explored critically.Keywords: hermeneutics, Tafseer, methodology, Ulum al- Qur’an, modernity
Procedia PDF Downloads 75
253 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection
Authors: Petra Hypšová, Martin Seitl
Abstract:
One of the challenges of forensic psychology is to detect deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing behavioral and physiological changes that occur as a result of the increased emotional and cognitive load caused by the production of distorted information. Typical are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expression, heart rate and its variability. Expanding technological capabilities have opened the space to detect these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies that show how different types of lies produced by different interviewers affect physiological and behavioral changes. The main objective of this study is to apply a specific non-contact technology for deception detection. The next objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking and behavioral analysis with FaceReader 9.0 software was used to achieve our goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. The interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy) to follow standardized procedures. The subject provided all information to the simulated interviewer. During follow-up analyzes, facial temperature (main ROIs: forehead, cheeks, the tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements and pupil dilation were observed. The results showed that the variables studied varied with respect to the production of prepared truths and lies versus the production of spontaneous truths and lies, as well as the variability of the simulated interviewer. The results also supported the assumption of variability in physiological and behavioural values during the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. A series of psychophysiological experiments provided evidence of variability in the areas of interest in the production of truths and lies to different interviewers. The combination of technologies used also led to a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment
Procedia PDF Downloads 99
252 Toward the Decarbonisation of EU Transport Sector: Impacts and Challenges of the Diffusion of Electric Vehicles
Authors: Francesca Fermi, Paola Astegiano, Angelo Martino, Stephanie Heitel, Michael Krail
Abstract:
In order to achieve the targeted emission reductions for the decarbonisation of the European economy by 2050, fundamental contributions are required from both energy and transport sectors. The objective of this paper is to analyse the impacts of a largescale diffusion of e-vehicles, either battery-based or fuel cells, together with the implementation of transport policies aiming at decreasing the use of motorised private modes in order to achieve greenhouse gas emission reduction goals, in the context of a future high share of renewable energy. The analysis of the impacts and challenges of future scenarios on transport sector is performed with the ASTRA (ASsessment of TRAnsport Strategies) model. ASTRA is a strategic system-dynamic model at European scale (EU28 countries, Switzerland and Norway), consisting of different sub-modules related to specific aspects: the transport system (e.g. passenger trips, tonnes moved), the vehicle fleet (composition and evolution of technologies), the demographic system, the economic system, the environmental system (energy consumption, emissions). A key feature of ASTRA is that the modules are linked together: changes in one system are transmitted to other systems and can feed-back to the original source of variation. Thanks to its multidimensional structure, ASTRA is capable to simulate a wide range of impacts stemming from the application of transport policy measures: the model addresses direct impacts as well as second-level and third-level impacts. The simulation of the different scenarios is performed within the REFLEX project, where the ASTRA model is employed in combination with several energy models in a comprehensive Modelling System. From the transport sector perspective, some of the impacts are driven by the trend of electricity price estimated from the energy modelling system. Nevertheless, the major drivers to a low carbon transport sector are policies related to increased fuel efficiency of conventional drivetrain technologies, improvement of demand management (e.g. increase of public transport and car sharing services/usage) and diffusion of environmentally friendly vehicles (e.g. electric vehicles). The final modelling results of the REFLEX project will be available from October 2018. The analysis of the impacts and challenges of future scenarios is performed in terms of transport, environmental and social indicators. The diffusion of e-vehicles produces a consistent reduction of future greenhouse gas emissions, although the decarbonisation target can be achieved only with the contribution of complementary transport policies on demand management and supporting the deployment of low-emission alternative energy for non-road transport modes. The paper explores the implications through time of transport policy measures on mobility and environment, underlying to what extent they can contribute to a decarbonisation of the transport sector. Acknowledgements: The results refer to the REFLEX project which has received grants from the European Union’s Horizon 2020 research and innovation program under Grant Agreement No. 691685.Keywords: decarbonisation, greenhouse gas emissions, e-mobility, transport policies, energy
Procedia PDF Downloads 153
251 Digital Image Correlation Based Mechanical Response Characterization of Thin-Walled Composite Cylindrical Shells
Authors: Sthanu Mahadev, Wen Chan, Melanie Lim
Abstract:
Anisotropy dominated continuous-fiber composite materials have garnered attention in numerous mechanical and aerospace structural applications. Tailored mechanical properties in advanced composites can exhibit superiority in terms of stiffness-to-weight ratio, strength-to-weight ratio, low-density characteristics, coupled with significant improvements in fatigue resistance as opposed to metal structure counterparts. Extensive research has demonstrated their core potential as more than just mere lightweight substitutes to conventional materials. Prior work done by Mahadev and Chan focused on formulating a modified composite shell theory based prognosis methodology for investigating the structural response of thin-walled circular cylindrical shell type composite configurations under in-plane mechanical loads respectively. The prime motivation to develop this theory stemmed from its capability to generate simple yet accurate closed-form analytical results that can efficiently characterize circular composite shell construction. It showcased the development of a novel mathematical framework to analytically identify the location of the centroid for thin-walled, open cross-section, curved composite shells that were characterized by circumferential arc angle, thickness-to-mean radius ratio, and total laminate thickness. Ply stress variations for curved cylindrical shells were analytically examined under the application of centric tensile and bending loading. This work presents a cost-effective, small-platform experimental methodology by taking advantage of the full-field measurement capability of digital image correlation (DIC) for an accurate assessment of key mechanical parameters such as in-plane mechanical stresses and strains, centroid location etc. Mechanical property measurement of advanced composite materials can become challenging due to their anisotropy and complex failure mechanisms. Full-field displacement measurements are well suited for characterizing the mechanical properties of composite materials because of the complexity of their deformation. This work encompasses the fabrication of a set of curved cylindrical shell coupons, the design and development of a novel test-fixture design and an innovative experimental methodology that demonstrates the capability to very accurately predict the location of centroid in such curved composite cylindrical strips via employing a DIC based strain measurement technique. Error percentage difference between experimental centroid measurements and previously estimated analytical centroid results are observed to be in good agreement. The developed analytical modified-shell theory provides the capability to understand the fundamental behavior of thin-walled cylindrical shells and offers the potential to generate novel avenues to understand the physics of such structures at a laminate level.Keywords: anisotropy, composites, curved cylindrical shells, digital image correlation
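The closed-form centroid result referred to above is not reproduced in the abstract; as a simple point of reference, the sketch below evaluates only the classical centroid of a thin, homogeneous circular arc strip, which lies a distance R·sin(α)/α from the center of curvature for a strip of mean radius R subtending a half-angle α. This is a simplified isotropic baseline under that assumption, not the laminate-weighted centroid of the authors' modified shell theory.

```python
import math

def thin_arc_centroid_offset(R_mean, arc_angle_deg):
    """Centroid offset (from the center of curvature, along the axis of
    symmetry) of a thin, homogeneous circular arc strip.
    R_mean: mean radius; arc_angle_deg: total circumferential arc angle."""
    alpha = math.radians(arc_angle_deg) / 2.0   # half-angle in radians
    return R_mean * math.sin(alpha) / alpha

# Example: a 90-degree curved strip with a mean radius of 100 mm
R = 100.0      # mm
angle = 90.0   # degrees
offset = thin_arc_centroid_offset(R, angle)
print(f"Centroid lies {offset:.2f} mm from the center of curvature")
# -> about 90.03 mm, i.e. inside the mean radius, as expected for a curved strip
```

A laminate-aware calculation would additionally weight each ply by its stiffness and position through the thickness, which is where the DIC measurements provide the experimental check.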
Procedia PDF Downloads 316
250 Isolation and Transplantation of Hepatocytes in an Experimental Model
Authors: Inas Raafat, Azza El Bassiouny, Waldemar L. Olszewsky, Nagui E. Mikhail, Mona Nossier, Nora E. I. El-Bassiouni, Mona Zoheiry, Houda Abou Taleb, Noha Abd El-Aal, Ali Baioumy, Shimaa Attia
Abstract:
Background: Orthotopic liver transplantation is an established treatment for patients with severe acute and end-stage chronic liver disease. The shortage of donor organs continues to be the rate-limiting factor for liver transplantation throughout the world. Hepatocyte transplantation is a promising treatment for several liver diseases and can also be used as a "bridge" to liver transplantation in cases of liver failure. Aim of the work: This study was designed to develop a highly efficient protocol for the isolation and transplantation of hepatocytes in an experimental Lewis rat model to provide satisfactory guidelines for future application in humans. Materials and Methods: Hepatocytes were isolated from the liver by the double perfusion technique, and bone marrow cells were isolated by centrifugation of the shafts of the tibia and femur of donor Lewis rats. Recipient rats were subjected to a sub-lethal dose of irradiation 2 days before transplantation. In a laparotomy operation, the spleen was injected with freshly isolated hepatocytes, and bone marrow cells were injected intravenously. The animals were sacrificed 45 days later, and splenic sections were prepared and stained with H&E, PAS, AFP, and Prox1. Results: The data obtained from this study showed that the double perfusion technique is successful in the separation of hepatocytes regarding cell number and viability. The method used for bone marrow cell separation also gave excellent results regarding cell number and viability. Intrasplenic engraftment of hepatocytes and liver tissue formation within the splenic tissue were found in 70% of cases. Hematoxylin and eosin-stained splenic sections from 7 rats showed sheets and clusters of cells among the splenic tissue. Periodic acid-Schiff-stained splenic sections from 7 rats showed clusters of hepatocytes with intensely stained pink cytoplasmic granules, denoting the presence of glycogen. Splenic sections from 7 rats stained with anti-α-fetoprotein antibody showed brownish cytoplasmic staining of the hepatocytes, denoting positive expression of AFP. Splenic sections from 7 rats stained with anti-Prox1 showed brownish nuclear staining of the hepatocytes, denoting positive expression of the Prox1 gene in these cells. Positive expression of the Prox1 gene was also detected in lymphocyte aggregations in the spleens. Conclusions: Isolation of liver cells by the double perfusion technique using collagenase buffer is a reliable method with a very satisfactory yield regarding cell number and viability. The intrasplenic route of transplantation of the freshly isolated liver cells in an immunocompromised model was found to give good results regarding cell engraftment and tissue formation. Further studies are needed to assess the function of engrafted hepatocytes by measuring prothrombin time and serum albumin and bilirubin levels.Keywords: Lewis rats, hepatocytes, BMCs, transplantation, AFP, Prox1
Procedia PDF Downloads 317
249 Supporting a Moral Growth Mindset Among College Students
Authors: Kate Allman, Heather Maranges, Elise Dykhuis
Abstract:
Moral Growth Mindset (MGM) is the belief that one has the capacity to become a more moral person, as opposed to a fixed conception of one’s moral ability and capacity (Han et al., 2018). Building from Dweck’s work in incremental implicit theories of intelligence (2008), Moral Growth Mindset (Han et al., 2020) extends growth mindsets into the moral dimension. The concept of MGM has the potential to help researchers understand how both mindsets and interventions can impact character development, and it has even been shown to have connections to voluntary service engagement (Han et al., 2018). Understanding the contexts in which MGM might be cultivated could help to promote the further cultivation of character, in addition to prosocial behaviors like service engagement, which may, in turn, promote larger scale engagement in social justice-oriented thoughts, feelings, and behaviors. In particular, college may be a place to intentionally cultivate a growth mindset toward moral capacities, given the unique developmental and maturational components of the college experience, including contextual opportunity (Lapsley & Narvaez, 2006) and independence requiring the constant consideration, revision, and internalization of personal values (Lapsley & Woodbury, 2016). In a semester-long, quasi-experimental study, we examined the impact of a pedagogical approach designed to cultivate college student character development on participants’ MGM. With an intervention (n=69) and a control group (n=97; Pre-course: 27% Men; 66% Women; 68% White; 18% Asian; 2% Black; <1% Hispanic/Latino), we investigated whether college courses that intentionally incorporate character education pedagogy (Lamb, Brant, Brooks, 2021) affect a variety of psychosocial variables associated with moral thoughts, feelings, identity, and behavior (e.g. moral growth mindset, honesty, compassion, etc.). The intervention group consisted of 69 undergraduate students (Pre-course: 40% Men; 52% Women; 68% White; 10.5% Black; 7.4% Asian; 4.2% Hispanic/Latino) that voluntarily enrolled in five undergraduate courses that encouraged students to engage with key concepts and methods of character development through the application of research-based strategies and personal reflection on goals and experiences. Moral Growth Mindset was measured using the four-item Moral Growth Mindset scale (Han et al., 2020), with items such as You can improve your basic morals and character considerably on a six-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Higher scores of MGM indicate a stronger belief that one can become a more moral person with personal effort. Reliability at Time 1 was Cronbach’s ɑ= .833, and at Time 2 Cronbach’s ɑ= .772. An Analysis of Covariance (ANCOVA) was conducted to explore whether post-course MGM scores were different between the intervention and control when controlling for pre-course MGM scores. The ANCOVA indicated significant differences in MGM between groups post-course, F(1,163) = 8.073, p = .005, R² = .11, where descriptive statistics indicate that intervention scores were higher than the control group at post-course. Results indicate that intentional character development pedagogy can be leveraged to support the development of Moral Growth Mindset and related capacities in undergraduate settings.Keywords: moral personality, character education, incremental theories of personality, growth mindset
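For readers who want to reproduce this kind of pre/post comparison, the sketch below shows how an ANCOVA of post-course MGM scores controlling for pre-course scores could be set up in Python with statsmodels; the column names and the simulated scores are hypothetical placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: pre- and post-course MGM scores on the 1-6 Likert scale
rng = np.random.default_rng(0)
n_control, n_intervention = 97, 69
df = pd.DataFrame({
    "group": ["control"] * n_control + ["intervention"] * n_intervention,
    "mgm_pre": rng.normal(4.6, 0.6, n_control + n_intervention),
})
# Simulate a small post-course gain in the intervention group
df["mgm_post"] = (df["mgm_pre"]
                  + np.where(df["group"] == "intervention", 0.25, 0.0)
                  + rng.normal(0, 0.4, len(df)))

# ANCOVA: post-course score ~ pre-course score + group
model = smf.ols("mgm_post ~ mgm_pre + C(group)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)    # F test for the group effect, adjusted for pre-course scores
print(model.params)   # adjusted group difference (intervention vs. control)
```

The group term in this model is the analogue of the reported F(1,163) test: it asks whether post-course scores differ between cohorts once pre-course scores are held constant.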
Procedia PDF Downloads 147
248 Evaluation of Antibiotic Resistance and Extended-Spectrum β-Lactamases Production Rates of Gram Negative Rods in a University Research and Practice Hospital, 2012-2015
Authors: Recep Kesli, Cengiz Demir, Onur Turkyilmaz, Hayriye Tokay
Abstract:
Objective: Gram-negative rods are a large group of bacteria that includes many families, genera, and species. Most clinical isolates belong to the family Enterobacteriaceae. Resistance due to the production of extended-spectrum β-lactamases (ESBLs) is a difficulty in the management of Enterobacteriaceae infections, but other mechanisms of resistance are also emerging, leading to multidrug resistance and threatening to create panresistant species. In this study, we aimed to evaluate the resistance rates of Gram-negative rod bacteria isolated from clinical specimens in the Microbiology Laboratory of Afyon Kocatepe University, ANS Research and Practice Hospital, between October 2012 and September 2015. Methods: The Gram-negative rod strains were identified by conventional methods and the VITEK 2 automated identification system (bio-Mérieux, Marcy l’etoile, France). Antibiotic resistance tests were performed by both the Kirby-Bauer disk-diffusion and automated Antimicrobial Susceptibility Testing (AST, bio-Mérieux, Marcy l’etoile, France) methods. Disk diffusion results were evaluated according to the standards of the Clinical and Laboratory Standards Institute (CLSI). Results: Of the 1,701 Enterobacteriaceae strains isolated in total, 1,434 (84.3%) were Klebsiella pneumoniae, 171 (10%) were Enterobacter spp., and 96 (5.6%) were Proteus spp.; of the 639 nonfermenting Gram-negative isolates, 477 (74.6%) were identified as Pseudomonas aeruginosa, 135 (21.1%) as Acinetobacter baumannii, and 27 (4.3%) as Stenotrophomonas maltophilia. The ESBL positivity rate of the whole Enterobacteriaceae group studied was 30.4%. Antibiotic resistance rates for Klebsiella pneumoniae were as follows: amikacin 30.4%, gentamicin 40.1%, ampicillin-sulbactam 64.5%, cefepime 56.7%, cefoxitin 35.3%, ceftazidime 66.8%, ciprofloxacin 65.2%, ertapenem 22.8%, imipenem 20.5%, meropenem 20.5%, and trimethoprim-sulfamethoxazole 50.1%; those for the 114 Enterobacter spp. isolates were: amikacin 26.3%, gentamicin 31.5%, cefepime 26.3%, ceftazidime 61.4%, ciprofloxacin 8.7%, ertapenem 8.7%, imipenem 12.2%, meropenem 12.2%, and trimethoprim-sulfamethoxazole 19.2%. Resistance rates for Proteus spp. were: meropenem 24.3%, imipenem 26.2%, amikacin 20.2%, cefepime 10.5%, ciprofloxacin and levofloxacin 33.3%, ceftazidime 31.6%, ceftriaxone 20%, gentamicin 15.2%, amoxicillin-clavulanate 26.6%, and trimethoprim-sulfamethoxazole 26.2%. Resistance rates of P. aeruginosa were as follows: amikacin 32%, gentamicin 42%, imipenem 43%, meropenem 43%, ciprofloxacin 50%, levofloxacin 52%, cefepime 38%, ceftazidime 63%, and piperacillin/tazobactam 85%; for Acinetobacter baumannii: amikacin 53.3%, gentamicin 56.6%, imipenem 83%, meropenem 86%, ciprofloxacin 100%, ceftazidime 100%, piperacillin/tazobactam 85%, and colistin 0%; and for S. maltophilia: levofloxacin 66.6% and trimethoprim/sulfamethoxazole 0%. Conclusions: This study showed that resistance in Gram-negative rods is a serious clinical problem in our hospital and suggested the need to perform typing of the isolated bacteria together with susceptibility testing regularly as part of routine laboratory procedures. This practice truly guides empirical antibiotic treatment choices, since each hospital shows a different resistance profile.Keywords: antibiotic resistance, gram negative rods, ESBL, VITEK 2
Procedia PDF Downloads 331
247 Effect of Non-Thermal Plasma, Chitosan and Polymyxin B on Quorum Sensing Activity and Biofilm of Pseudomonas aeruginosa
Authors: Alena Cejkova, Martina Paldrychova, Jana Michailidu, Olga Matatkova, Jan Masak
Abstract:
The increasing resistance of pathogenic microorganisms to many antibiotics is a serious threat to the treatment of infectious diseases and to the cleaning of medical instruments. It should be added that the resistance of microbial populations growing in biofilms is often up to 1000 times higher compared to that of planktonic cells. Biofilm formation in a number of microorganisms is largely influenced by the quorum sensing regulatory mechanism. Finding external factors, such as natural substances or physical processes, that can interfere effectively with quorum sensing signal molecules should reduce the ability of the cell population to form biofilm and increase the effectiveness of antibiotics. The present work is devoted to the effect of chitosan, as a representative of natural substances with anti-biofilm activity, and non-thermal plasma (NTP), alone or in combination with polymyxin B, on biofilm formation by Pseudomonas aeruginosa. Particular attention was paid to the influence of these agents on the level of quorum sensing signal molecules (acyl-homoserine lactones) during planktonic and biofilm cultivations. Opportunistic pathogenic strains of Pseudomonas aeruginosa (DBM 3081, DBM 3777, ATCC 10145, ATCC 15442) were used as model microorganisms. Cultivations of planktonic and biofilm populations in 96-well microtiter plates on a horizontal shaker were used for the determination of the antibiotic and anti-biofilm activity of chitosan and polymyxin B. Biofilm-growing cells on a titanium alloy, which is used for the preparation of joint replacements, were exposed to non-thermal plasma generated by a cometary corona with a metallic grid for 15 and 30 minutes. Cultivation then followed in fresh LB medium with or without chitosan or polymyxin B for the next 24 h. Biofilms were quantified by the crystal violet assay. The metabolic activity of the cells in the biofilm was measured using the MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) colorimetric test, based on the reduction of MTT to formazan by the dehydrogenase system of living cells. The activity of N-acyl homoserine lactones (AHLs), compounds involved in the regulation of biofilm formation, was determined using an Agrobacterium tumefaciens strain harboring a traG::lacZ/traR reporter gene responsive to AHLs. The experiments showed that both chitosan and non-thermal plasma reduce the AHL level and thus biofilm formation and stability. The effectiveness of both agents was somewhat strain-dependent. During the chitosan-induced (45 mg/l) eradication of P. aeruginosa DBM 3081 biofilm on the titanium alloy, there was an 80% decrease in AHLs. Applying chitosan or NTP alone to the P. aeruginosa DBM 3777 biofilm did not cause a significant decrease in AHLs; however, the combination of both (chitosan 55 mg/l and NTP 30 min) resulted in a 70% decrease in AHLs. The combined application of NTP and polymyxin B allowed the antibiotic concentration to be reduced while achieving the same level of AHL inhibition in P. aeruginosa ATCC 15442. The results showed that non-thermal plasma and chitosan have considerable potential for the eradication of highly resistant P. aeruginosa biofilms, for example, on medical instruments or joint implants.Keywords: anti-biofilm activity, chitosan, non-thermal plasma, opportunistic pathogens
Procedia PDF Downloads 200
246 Unravelling Glyphosate's Disruptive Effects on the Photochemical Efficiency of Amaranthus cruentus
Authors: Jacques M. Berner, Lehlogonolo Maloma
Abstract:
Context: Glyphosate, a widely used herbicide, has raised concerns about its impact on various crops. Amaranthus cruentus, an important grain crop species, is particularly susceptible to glyphosate. Understanding the specific disruptions caused by glyphosate on the photosynthetic process in Amaranthus cruentus is crucial for assessing its effects on crop productivity and ecological sustainability. Research Aim: This study aimed to investigate the dose-dependent impact of glyphosate on the photochemical efficiency of Amaranthus cruentus using the OJIP transient analysis. The goal was to assess the specific disruptions caused by glyphosate on key parameters of photosystem II. Methodology: The experiment was conducted in a controlled greenhouse environment. Amaranthus cruentus plants were exposed to different concentrations of glyphosate, including half, recommended, and double the recommended application rates. The photochemical efficiency of the plants was evaluated using non-invasive chlorophyll a fluorescence measurements and subsequent analysis of OJIP transients. Measurements were taken on 1-hour dark-adapted leaves using a Hansatech Handy PEA+ chlorophyll fluorimeter. Findings: The study's results demonstrated a significant reduction in the photochemical efficiency of Amaranthus cruentus following glyphosate treatment. The OJIP transients showed distinct alterations in the glyphosate-treated plants compared to the control group. These changes included a decrease in maximal fluorescence (FP) and a delay in the rise of the fluorescence signal, indicating impairment in the energy conversion process within the photosystem II. Glyphosate exposure also led to a substantial decrease in the maximum quantum yield efficiency of photosystem II (FV/FM) and the total performance index (PItotal), which reflects the overall photochemical efficiency of photosystem II. These reductions in photochemical efficiency were observed even at half the recommended dose of glyphosate. Theoretical Importance: The study provides valuable insights into the specific disruptions caused by glyphosate on the photochemical efficiency of Amaranthus cruentus. Data Collection and Analysis Procedures: Data collection involved non-invasive chlorophyll a fluorescence measurements using a chlorophyll fluorimeter on dark-adapted leaves. The OJIP transients were then analyzed to assess specific disruptions in key parameters of photosystem II. Statistical analysis was conducted to determine the significance of the differences observed between glyphosate-treated plants and the control group. Question Addressed: The study aimed to address the question of how glyphosate exposure affects the photochemical efficiency of Amaranthus cruentus, specifically examining disruptions in the photosynthetic electron transport chain and overall photochemical efficiency. Conclusion: The study demonstrates that glyphosate severely impairs the photochemical efficiency of Amaranthus cruentus, as indicated by the alterations in OJIP transients. Even at half the recommended dose, glyphosate caused significant reductions in photochemical efficiency. These findings highlight the detrimental effects of glyphosate on crop productivity and emphasize the need for further research to evaluate its long-term consequences and ecological implications in agriculture. 
The authors gratefully acknowledge the support from North-West University for making this research possible.Keywords: glyphosate, Amaranthus cruentus, OJIP transient analysis, PItotal, photochemical efficiency, chlorophyll fluorescence, weeds
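As a minimal illustration of how the FV/FM parameter cited above is obtained from a dark-adapted fluorescence transient, the sketch below computes F0, FM, and FV/FM from a recorded OJIP curve; the transient values are hypothetical, and the full JIP-test parameters such as PItotal require additional intermediate steps not shown here.

```python
import numpy as np

def fv_fm_from_transient(times_ms, fluorescence, f0_time_ms=0.05):
    """Compute F0, FM and FV/FM from a dark-adapted OJIP transient.
    F0 is approximated by the earliest recorded point (~50 microseconds,
    the 'O' step); FM is the transient maximum (the 'P' step)."""
    times_ms = np.asarray(times_ms, dtype=float)
    fluorescence = np.asarray(fluorescence, dtype=float)
    f0 = fluorescence[np.argmin(np.abs(times_ms - f0_time_ms))]
    fm = fluorescence.max()
    fv_fm = (fm - f0) / fm   # maximum quantum yield of PSII photochemistry
    return f0, fm, fv_fm

# Hypothetical transient: a few points between 0.05 ms and 1000 ms
t = [0.05, 0.1, 0.3, 2.0, 30.0, 300.0, 1000.0]
f = [400, 450, 650, 900, 1500, 2350, 2300]
f0, fm, fv_fm = fv_fm_from_transient(t, f)
print(f"F0={f0:.0f}, FM={fm:.0f}, FV/FM={fv_fm:.3f}")  # healthy leaves: ~0.83
```

A glyphosate-induced drop in FP, as reported in the abstract, lowers FM and therefore FV/FM, which is exactly the pattern the fluorimeter measurements detected.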
Procedia PDF Downloads 91
245 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the adjustment direction for the material particle or the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick’s law of diffusion and the MacInnes and Ohm equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. Several numerical methods available in the literature can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step; thus, it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique would allow the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select an adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple Euler method for long-term tests will be presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods will be compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications like grid primary frequency and inertia control and electric vehicle braking and acceleration.Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
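To make the comparison concrete, the sketch below applies the Crank-Nicolson scheme to a generic 1D Fick diffusion equation of the kind that appears in the electrolyte sub-model; the diffusion coefficient, grid, and boundary conditions are illustrative placeholders rather than DFN parameters, and the full DFN model couples several additional equations not shown here.

```python
import numpy as np

def crank_nicolson_diffusion(c0, D, dx, dt, n_steps):
    """Crank-Nicolson integration of dc/dt = D * d2c/dx2 on a 1D grid
    with zero-flux (Neumann) boundaries. Unconditionally stable and
    second-order accurate in time."""
    n = len(c0)
    r = D * dt / (2.0 * dx**2)

    # Build the 1D Laplacian stencil L, then (I - r*L) and (I + r*L)
    L = np.zeros((n, n))
    for i in range(n):
        L[i, i] = -2.0
        if i > 0:
            L[i, i - 1] = 1.0
        if i < n - 1:
            L[i, i + 1] = 1.0
    # Zero-flux boundaries: mirror the neighbouring node
    L[0, 0], L[-1, -1] = -1.0, -1.0

    A = np.eye(n) - r * L    # implicit half-step
    B = np.eye(n) + r * L    # explicit half-step

    c = np.array(c0, dtype=float)
    for _ in range(n_steps):
        c = np.linalg.solve(A, B @ c)
    return c

# Illustrative run: relax an initial concentration step across the domain
nx = 50
c_init = np.where(np.arange(nx) < nx // 2, 1000.0, 500.0)   # mol/m^3
c_final = crank_nicolson_diffusion(c_init, D=2.5e-10, dx=1e-6, dt=1.0, n_steps=600)
print(c_final.min(), c_final.max())   # profile smooths toward ~750 mol/m^3
```

Replacing the two matrices with the identity plus or minus the full Laplacian term recovers the explicit or implicit Euler variants, which is what makes the three schemes easy to benchmark against each other on the same grid.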
Procedia PDF Downloads 23
244 Soybean Lecithin Based Reverse Micellar Extraction of Pectinase from Synthetic Solution
Authors: Sivananth Murugesan, I. Regupathi, B. Vishwas Prabhu, Ankit Devatwal, Vishnu Sivan Pillai
Abstract:
Pectinase is an important enzyme with a wide range of applications, including textile processing and bioscouring of cotton fibers, coffee and tea fermentation, purification of plant viruses, and oil extraction. The selective separation and purification of pectinase from fermentation broth, and the recovery of the enzyme from the process stream for reuse, are costly processes in most enzyme-based industries. It is difficult to identify a suitable medium that enhances enzyme activity and retains the enzyme's characteristics during such processes. The cost-effective, selective separation of enzymes through modified liquid-liquid extraction is of current research interest worldwide. Reverse micellar extraction, a globally acclaimed liquid-liquid extraction technique, is well known for the separation and purification of solutes from the feed, offering high solute specificity and partitioning, ease of operation, and recycling of the extractants used. Surfactants added to an apolar solvent at concentrations above the critical micelle concentration form micelles, and the addition of water to this micellar phase in turn forms reverse micelles, or water-in-oil emulsions. Electrostatic interactions play a major role in the separation/purification of solutes using reverse micelles, and these interaction parameters can be altered by changing the pH or by adding cosolvents, surfactants, electrolytes, and non-electrolytes. Even though many chemical-based commercial surfactants have been utilized for this purpose, biosurfactants are more suitable for the purification of enzymes used in food applications. The present work focused on the partitioning of pectinase from a synthetic aqueous solution into the reverse micelle phase formed by a biosurfactant, soybean lecithin, dissolved in chloroform. The critical micelle concentration of the soybean lecithin/chloroform solution was identified through refractive index and density measurements. Surfactant concentrations above and below the critical micelle concentration were considered to study their effect on enzyme activity and enzyme partitioning within the reverse micelle phase. The effect of pH and electrolyte salts on the partitioning behavior was studied by varying the system pH and the concentration of different salts during the forward and back extraction steps. It was observed that lower concentrations of soybean lecithin enhanced the enzyme activity within the water core of the reverse micelle while maximizing the extraction efficiency. A maximum pectinase yield of 85% with a partitioning coefficient of 5.7 was achieved at pH 4.8 during forward extraction, and an 88% yield with a partitioning coefficient of 7.1 was observed during back extraction at pH 5.0. However, the addition of salt decreased the enzyme activity, and especially at higher salt concentrations the enzyme activity declined drastically during both the forward and back extraction steps. The results proved that reverse micelles formed by soybean lecithin and chloroform may be used for the extraction of pectinase from aqueous solution. Further, the reverse micelles can be considered nanoreactors that enhance enzyme activity and maximize substrate utilization under optimized conditions, paving the way toward process intensification and scale-down.Keywords: pectinase, reverse micelles, soybean lecithin, selective partitioning
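As a quick check on the numbers reported above, the extraction yield and the partition coefficient are linked by Y = K·V_receiving/(K·V_receiving + V_donor), where K is the ratio of solute concentration in the receiving phase to that in the donor phase. The sketch below assumes equal phase volumes, an assumption not stated in the abstract, under which K = 5.7 and K = 7.1 reproduce yields of roughly 85% and 88%; for back extraction the roles of the phases are reversed, but the same relation applies with K defined toward the stripping phase.

```python
def extraction_yield(K, v_receiving=1.0, v_donor=1.0):
    """Fraction of enzyme transferred into the receiving phase at equilibrium.
    K: partition coefficient (C_receiving / C_donor); volumes are phase volumes."""
    return K * v_receiving / (K * v_receiving + v_donor)

# Reported partition coefficients, assuming equal phase volumes (hypothetical)
for step, K in [("forward extraction, pH 4.8", 5.7), ("back extraction, pH 5.0", 7.1)]:
    print(f"{step}: K = {K}, predicted yield = {extraction_yield(K):.1%}")
# forward: ~85.1%, back: ~87.7% -- close to the 85% and 88% yields reported
```

The close match suggests the reported yields and partition coefficients are internally consistent under that equal-volume assumption.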
Procedia PDF Downloads 372
243 Eco-City Planning and Urban Design in Lagos, Nigeria: Recent Innovations, Trends, Concerns, Challenges, and Solutions
Authors: Dahunsi Michael Oluseyi
Abstract:
This paper aims to extensively examine eco-city planning and urban design in Lagos, Nigeria. It will delve into the city's developments, challenges, and potential solutions to offer insights for sustainable urban growth within the rapidly expanding urban landscape. The research will scrutinize recent innovations, emerging trends, and practical remedies to promote ecological sustainability within an urban framework. It will encompass a more in-depth review of current literature, case studies, and qualitative analyses, thereby augmenting the depth and breadth of the research. The objectives are to assess the current eco-city planning initiatives and urban design trends in Lagos, Nigeria, considering the city's unique characteristics and challenges. To identify and analyze the challenges encountered during the implementation of eco-friendly urban developments in Lagos, to explore and evaluate the innovative and practical solutions that are implemented to promote sustainability within the city, to provide comprehensive insights and actionable recommendations for policymakers, urban planners, and other stakeholders involved in sustainable urban development in Lagos, the rapid urbanization of Lagos has brought forth a myriad of challenges, including a burgeoning population, inadequate infrastructure, waste management issues, and environmental pollution. Eco-city planning has emerged as a promising approach to addressing these obstacles, striving to create urban spaces that are more habitable, resource-efficient, and environmentally friendly. This research holds substantial importance in exploring the application of eco-city planning principles within a megacity like Lagos. Analyzing recent innovations, trends, concerns, challenges, and solutions provides invaluable insights for policymakers, urban planners, and stakeholders dedicated to fostering sustainable urban development. The methodologies employed in this research are structured to embrace a multifaceted and intricate approach, aiming to facilitate a comprehensive understanding of the complexities inherent in eco-city planning and urban design in Lagos, Nigeria. This methodological framework is designed to encompass various diverse strategies and analytical tools to effectively capture the multidimensional aspects of sustainable urban development. It involves an in-depth analysis of academic publications, governmental reports, and urban planning documents to highlight global eco-city planning trends and gather Lagos-specific insights through a detailed exploration of eco-friendly initiatives and projects in Lagos to evaluate successes, challenges, and strategies for addressing environmental concerns by engaging key stakeholders, including urban planners, policymakers, environmental experts, and residents, to collect firsthand perspectives, concerns, and insights. Also, a thorough analysis will be carried out on data collected from literature reviews, case studies, interviews, and surveys used to extract prevalent patterns, challenges, and innovative solutions from diverse sources. This study aims to contribute to the discourse on sustainable urban development by offering a comprehensive analysis of eco-city planning in Lagos and providing practical recommendations for a more sustainable urban future.Keywords: eco-friendly, innovation, sustainability, stakeholders
Procedia PDF Downloads 62