Search results for: efficient service
384 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, classroom instruction focuses mainly on children’s word decoding abilities. However, skilled readers must master all the components of reading: word decoding, reading fluency, and comprehension. The relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed opinions on the relative contributions of word decoding and reading fluency to reading comprehension, and the scenario in alphasyllabic languages remains unexplored. Aim and Objectives: The aim of the study was to explore the roles of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio within the age ranges of 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbook of the children’s core curriculum. Each passage carried five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. Pseudowords were formed by interchanging the syllables within a meaningful word while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data were collected on the study sample. The data were collected individually; reading fluency was scored as words correctly read per minute, and pseudoword decoding was scored for reading accuracy. Results: Descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. Grade III children performed highest, Grade I lowest, and Grade II intermediate between the two; the trend indicates that reading skills gradually improve across grades. Pearson’s correlation coefficients showed moderate and highly significant (p=0.00) positive correlations between the variables, indicating the interdependency of all three components required for reading. Hierarchical regression analysis revealed that pseudoword decoding explained 37% of the variance in reading comprehension and was highly significant; upon subsequent entry of the reading fluency measure, there was no significant change in R-square, which rose by only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly than reading fluency to reading comprehension during the initial years of schooling in children learning to read Kannada.
Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
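As an illustration of the two-step hierarchical regression described in this abstract, the sketch below shows how the change in R² from adding fluency after decoding would typically be computed. It is a minimal example assuming a hypothetical per-child score table; the file and column names are assumptions, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-child scores: pseudoword decoding accuracy, reading
# fluency (words correct per minute), and comprehension score.
df = pd.read_csv("kannada_reading_scores.csv")  # assumed file/columns

# Step 1: decoding alone.
step1 = smf.ols("comprehension ~ decoding", data=df).fit()

# Step 2: add fluency and inspect the change in R-squared.
step2 = smf.ols("comprehension ~ decoding + fluency", data=df).fit()

print(f"R2 step 1 (decoding only): {step1.rsquared:.2f}")  # ~0.37 in the study
print(f"R2 step 2 (+ fluency):     {step2.rsquared:.2f}")
print(f"Delta R2: {step2.rsquared - step1.rsquared:.2f}")  # ~0.03 in the study
```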
383 Preparation of Activated Carbon From Waste Feedstock: Activation Variables Optimization and Influence
Authors: Oluwagbemi Victor Aladeokin
Abstract:
In the last decade, global peanut cultivation has seen increased demand, attributed to peanuts' health benefits, rising to ~41.4 MMT in 2019/2020. Peanut shells and other nutshells are considered waste in various parts of the world and are usually used only for their fuel value. However, this agricultural by-product can be converted into a higher-value product such as activated carbon. Owing to its highly porous structure, activated carbon has for many years been widely and effectively used as an adsorbent in the purification and separation of gases and liquids. Activated carbons used for commercial purposes are primarily made from a range of precursors such as wood, coconut shell, coal, and bones. However, due to difficulty in regeneration and high cost, various agricultural residues such as rice husk, corn stalks, apricot stones, almond shells, and coffee beans have been explored for producing activated carbons. In the present study, the potential of peanut shells as a precursor for activated carbon, and the adsorption capacity of the product, is investigated. Precursors used to produce activated carbon usually have a carbon content above 45%, whereas a typical raw peanut shell has 42 wt.% carbon. To increase the yield, this study employed a chemical activation method using zinc chloride, which is well known for its effectiveness in increasing the porosity of carbonaceous materials. In chemical activation, activation temperature and impregnation ratio are the parameters commonly reported to be most significant; this study also examined the influence of activation time on the development of activated carbon from peanut shells. Activated carbons are applied for different purposes, and as their applications become more specific, understanding the influence of the activation variables becomes paramount for better control of the quality of the final product. The traditional approach to investigating the influence of the activation parameters experimentally involves varying one parameter at a time; a more efficient way to reduce the number of experimental runs is to apply design of experiments. One of the objectives of this study is to optimize the activation variables. Thus, this work employed the response surface methodology of design of experiments to study the interactions between the activation parameters and consequently optimize them (temperature, impregnation ratio, and activation time). The optimum activation conditions found were 485 °C, 15 min, and 1.7 for temperature, activation time, and impregnation ratio respectively. The optimum conditions resulted in an activated carbon with a relatively high surface area of ca. 1700 m²/g, 47% yield, relatively high density, low ash, and high fixed carbon content. Impregnation ratio and temperature were found to most strongly influence the final characteristics of the activated carbon produced from peanut shells. The results of this study, using the response surface methodology technique, reveal the potential of the chemical activation of peanut shells, and the most significant parameters influencing it, to produce activated carbon that can find use in both liquid- and gas-phase adsorption applications.
Keywords: chemical activation, fixed carbon, impregnation ratio, optimum, surface area
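The response surface methodology step described above is conventionally implemented by fitting a full quadratic model to the design-of-experiments runs and then optimizing the fitted surface. The sketch below illustrates that workflow; the data file, column names, starting point, and bounds are assumptions for illustration, not the authors' code or design.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy.optimize import minimize

# Hypothetical design-of-experiments results: one row per run with
# temperature T (degC), impregnation ratio R, time t (min), surface area SA (m2/g).
runs = pd.read_csv("activation_doe_runs.csv")  # assumed file/columns

# Full quadratic response surface, the usual RSM model form.
model = smf.ols(
    "SA ~ T + R + t + I(T**2) + I(R**2) + I(t**2) + T:R + T:t + R:t",
    data=runs,
).fit()

def neg_surface_area(x):
    T, R, t = x
    row = pd.DataFrame({"T": [T], "R": [R], "t": [t]})
    return -model.predict(row)[0]

# Maximize predicted SA within the experimental region (bounds illustrative).
opt = minimize(neg_surface_area, x0=[500.0, 1.5, 20.0],
               bounds=[(400, 600), (0.5, 3.0), (5, 60)])
print("optimum (T, R, t):", opt.x, "predicted SA:", -opt.fun)
```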
382 Magnetofluidics for Mass Transfer and Mixing Enhancement in a Micro Scale Device
Authors: Majid Hejazian, Nam-Trung Nguyen
Abstract:
Over the past few years, microfluidic devices have attracted significant attention from industry and academia due to advantages such as small sample volume, low cost, and high efficiency. Microfluidic devices have applications in chemical, biological, and industrial analysis and can facilitate assays of biomaterials and chemical reactions, separation, and sensing. Micromixers are one of the important microfluidic concepts. Micromixers can work as stand-alone devices or be integrated into a more complex microfluidic system such as a lab-on-a-chip (LOC). Micromixers are categorized as passive or active. Passive micromixers rely only on the arrangement of the phases to be mixed, contain no moving parts, and require no energy; active micromixers require external fields such as pressure, temperature, electric, and acoustic fields. Rapid and efficient mixing is important for many applications such as biological, chemical, and biochemical analysis. Achieving fast and homogeneous mixing of multiple samples in microfluidic devices has been studied and discussed in the recent literature. Improvements in mixing rely on effective mass transport at the microscale, but this is currently limited to molecular diffusion due to the predominantly laminar flow at this size scale. Using a magnetic field to elevate mass transport is an effective solution for mixing enhancement in microfluidics. The use of a non-uniform magnetic field to improve mass transfer performance in a microfluidic device is demonstrated in this work. The phenomenon of mixing ferrofluid and DI-water streams has been reported before, but mass transfer enhancement of other, non-magnetic species through a magnetic field has not been studied and evaluated extensively. In the present work, permanent magnets were used in a simple microfluidic device to create a non-uniform magnetic field. Two streams are introduced into the microchannel: one contains fluorescent dye mixed with diluted ferrofluid to induce enhanced mass transport of the dye, and the other is a non-magnetic DI-water stream. Mass transport enhancement of the fluorescent dye is evaluated using fluorescence measurement techniques, and the concentration field is measured for different flow rates. Due to the effect of the magnetic field, a body force is exerted on the paramagnetic stream and expands the ferrofluid stream into the non-magnetic DI-water flow. The experimental results demonstrate that without a magnetic field, both the magnetic nanoparticles of the ferrofluid and the fluorescent dye rely solely on molecular diffusion to spread. The non-uniform magnetic field created by the permanent magnets around the microchannel, together with the diluted ferrofluid, can improve the mass transport of non-magnetic solutes in a microfluidic device. The susceptibility mismatch between the fluids results in a magnetoconvective secondary flow towards the magnets and subsequently enhances the mass transport of the non-magnetic fluorescent dye. A significant enhancement in mass transport of the fluorescent dye was observed. The platform presented here could be used as a microfluidics-based micromixer for chemical and biological applications.
Keywords: ferrofluid, mass transfer, micromixer, microfluidics, magnetic
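For context, the body force driving the magnetoconvective secondary flow is commonly written in the ferrohydrodynamic (Kelvin) form below. The abstract does not give the authors' exact formulation, so this is a standard textbook expression rather than their model:

```latex
% Kelvin body force per unit volume on a magnetizable fluid:
\mathbf{f}_m = \mu_0 \,(\mathbf{M}\cdot\nabla)\,\mathbf{H}
% For a dilute, linearly magnetizable stream (\mathbf{M} = \chi\,\mathbf{H}):
\mathbf{f}_m = \tfrac{1}{2}\,\mu_0\,\chi\,\nabla\lvert\mathbf{H}\rvert^{2}
% A susceptibility mismatch \Delta\chi between the ferrofluid and DI-water
% streams therefore drives a secondary flow toward the magnets.
```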
381 Development and Modelling of Cellulose Nano-Crystal from Agricultural Wastes for Adsorptive Removal of Pharmaceuticals in Wastewater
Authors: Abubakar Muhammad Hammari, Usman Dadum Hamza, Maryam Ibrahim, Kabir Garba, Idris Muhammad Misau
Abstract:
Pharmaceuticals are increasingly present in water systems, posing threats to ecosystems and human health. The effective treatment of pharmaceutical wastewater presents a significant challenge due to the complex and diverse organic and inorganic contaminants it contains. Conventional treatment methods often struggle to completely remove these pollutants due to their stability and water solubility, leading to environmental concerns and potential health risks. This research proposes the use of cellulose nanocrystals (CNCs) derived from agricultural waste as efficient and sustainable adsorbents for pharmaceutical wastewater treatment. CNCs offer high surface area, biodegradability, and low cost compared to existing options. This study evaluates the production, characterization, adsorption properties, and reusability of CNCs derived from waste paper (CNC-WP), rice husk (CNC-RH), and groundnut shell (CNC-GS). The percentage yield of CNCs was highest from waste paper at 50.67%, followed by groundnut shell at 33.40% and rice husk at 26.46%. X-ray diffraction (XRD) confirmed the crystalline structure of cellulose across all samples, while scanning electron microscopy (SEM) revealed a needle-like morphology with variations in size distribution. Energy-dispersive X-ray spectroscopy (EDX) identified carbon and oxygen as the primary elements, with minor residual inorganic materials varying by source. BET analysis indicated high surface areas for all CNCs, with CNC-RH exhibiting the highest value (464.592 m²/g), suggesting a more porous structure. The pore sizes of all samples fell within the mesopore range (2.108 nm to 2.153 nm). Adsorption studies focused on metronidazole (MNZ) removal using CNC-WP. Isotherm models, including Langmuir and Sips, described the equilibrium between the MNZ concentration and adsorption onto CNC-WP, showing the best fit with R² values exceeding 0.95. The adsorption process was favourable, with monolayer coverage and potential binding-energy heterogeneity. Kinetic modelling identified the pseudo-second-order model as the best fit (R² = 1, SSE = 5.00 × 10⁻⁷), indicating chemisorption as the predominant mechanism. Thermodynamic analysis revealed negative ΔG values at all temperatures, indicating spontaneous adsorption, with more favourable adsorption at higher temperatures. The adsorption process was exothermic, as indicated by negative ΔH values. Reusability studies demonstrated that CNC-WP retained high MNZ removal efficiency, with a modest decrease from 99.59% to 89.11% over ten regeneration cycles. This study highlights the efficiency of waste paper as a raw material for CNC production and its potential for effective and reusable MNZ adsorption.
Keywords: cellulose nanocrystals (CNCs), adsorption efficiency, metronidazole removal, reusability
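The Langmuir isotherm fit reported above can be reproduced with nonlinear least squares; the sketch below also states the integrated pseudo-second-order kinetic form for reference. The concentrations and initial guesses are illustrative numbers, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: residual MNZ concentration Ce (mg/L)
# and adsorbed amount qe (mg/g) on CNC-WP.
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = np.array([8.1, 15.3, 22.7, 29.4, 33.8])

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax*KL*Ce / (1 + KL*Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[40.0, 0.1])
print(f"Langmuir qmax = {qmax:.1f} mg/g, KL = {KL:.3f} L/mg")

def pseudo_second_order(t, k2, qe_eq):
    """Integrated pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)."""
    return k2 * qe_eq**2 * t / (1.0 + k2 * qe_eq * t)
```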
380 For Whom Is Legal Aid: A Critical Analysis of the State-Funded Legal Aid in Criminal Cases in Tajikistan
Authors: Umeda Junaydova
Abstract:
Legal aid is a key element of access to justice. According to the UN Principles and Guidelines on Access to Legal Aid in Criminal Justice Systems, member states bear the obligation to put in place accessible, effective, sustainable, and credible legal aid systems. Regarding this obligation, developing countries, such as Tajikistan, face challenges in financing such systems; thus, many developed nations have launched rule-of-law programs to support these states and ensure access to justice for all. Following independence from the Soviet Union, Tajikistan committed to introducing the rule of law and providing access to justice. This newly established country was weak, and the sudden outbreak of civil war aggravated the situation even more. The country needed external support and opened its doors to foreign donors to assist it on its way to development. In 2015, Tajikistan, with the financial support of development partners, established a state-funded legal aid system that provides legal assistance to vulnerable and marginalized populations, including in criminal cases. In the beginning, almost the whole system was financed from donor funds; since then, the contribution of the government has gradually increased, and it currently covers 80% of the total budget. All these government actions toward ensuring access to criminal legal aid for disadvantaged groups look promising; however, the reality is completely different. Currently, not all disadvantaged people are covered by these services, and their cases are often considered without appropriate defense, which leads to violations of fundamental human rights. This research presents a comprehensive exploration of the interplay between donor assistance and the effectiveness of legal aid services in Tajikistan, with a specific focus on criminal cases involving vulnerable groups, such as women and children. In the context of Tajikistan, this study addresses a pressing concern: despite substantial financial support from international donors, state-funded legal aid services often fall short of meeting the needs of poor and vulnerable populations. The study delves into the underlying complexities of this issue and examines the structural, operational, and systemic challenges faced by legal aid providers, shedding light on the factors contributing to the ineffectiveness of legal aid services. Furthermore, it seeks to identify the root causes of these issues, revealing the barriers that hinder the delivery of adequate legal aid services. The research adopts a socio-legal methodology to ensure an appropriate combination of multiple methods. The findings of this research hold significant implications for both policymakers and practitioners, offering insights into the enhancement of legal aid services and access to justice for disadvantaged and marginalized populations in Tajikistan. By addressing these pressing questions, this study aims to fill a gap in the legal literature and contribute to the development of a more equitable and efficient legal aid system that better serves the needs of the most vulnerable members of society.
Keywords: access to justice, legal aid, rule of law, right to counsel
379 Enhancing the Effectiveness of Witness Examination through Deposition System in Korean Criminal Trials: Insights from the U.S. Evidence Discovery Process
Authors: Qi Wang
Abstract:
With the expansion of trial-centered principles, the importance of witness examination in Korean criminal proceedings has been increasingly emphasized. However, several practical challenges have emerged in courtroom examinations, including concerns about witnesses’ memory deterioration due to prolonged trial periods, the possibility of inaccurate testimony due to courtroom anxiety and tension, risks of testimony retraction, and witnesses’ refusal to appear. These issues have led to a decline in the effective utilization of witness testimony. This study analyzes the deposition system, which is widely used in the U.S. evidence discovery process, and examines its potential implementation within the Korean criminal procedure framework. Furthermore, it explores the scope of application, procedural design, and measures to prevent potential abuse if the system were to be adopted. Under the adversarial litigation structure that has evolved through several amendments to the Criminal Procedure Act, the deposition system, although conducted pre-trial, serves as a preliminary procedure to facilitate efficient and effective witness examination during trial. This system not only aligns with the goal of discovering substantive truth but also upholds the practical ideals of trial-centered principles while promoting judicial economy. Furthermore, with the legal foundation established by Article 266 of the Criminal Procedure Act and related provisions, this study concludes that the implementation of the deposition system is both feasible and appropriate for the Korean criminal justice system. The specific functions of depositions include providing case-related information to refresh witnesses’ memory as a preliminary to courtroom examination, pre-reviewing existing statement documents to enhance trial efficiency, and conducting preliminary examinations on key issues and anticipated questions. The subsequent courtroom witness examination focuses on verifying testimony through public and cross-examination, identifying and analyzing contradictions in testimony, and conducting double verification of testimony credibility under judicial supervision. Regarding operational aspects, both prosecution and defense may request depositions, subject to court approval. The deposition process involves video or audio recording, complete documentation by court reporters, and the preparation of transcripts, with copies provided to all parties and the original included in court records. The admissibility of deposition transcripts is recognized under Article 311 of the Criminal Procedure Act. Given prosecutors’ advantageous position in evidence collection, which may lead to indifference or avoidance of depositions, the study emphasizes the need to reinforce prosecutors’ public interest status and objective duties. Additionally, it recommends strengthening pre-employment ethics education and post-violation disciplinary measures for prosecutors.
Keywords: witness examination, deposition system, Korean criminal procedure, evidence discovery, trial-centered principle
378 Community Strengths and Indigenous Resilience as Drivers for Health Reform Change
Authors: Shana Malio-Satele, Lemalu Silao Vaisola Sefo
Abstract:
Introductory Statement: South Seas Healthcare is Ōtara’s largest Pacific health provider in South Auckland, New Zealand. Our vision is excellent health and well-being for Pacific people and all communities through strong Pacific values. During the DELTA and Omicron outbreaks of COVID-19, our Pacific people, indigenous Māori, and the community of South Auckland were disproportionately affected and faced significant hardship, with existing inequities magnified. This study highlights community-based learnings about harnessing strengths such as indigenous resilience and family-informed experiences and stories, which provide critical insights to inform health reform changes that will be sustainable and equitable for all indigenous populations. It is based on learnings acquired during COVID-19 that challenge the deficit narrative common in healthcare about indigenous populations, and it shares case studies of marginalised groups and religious groups and the successful application of indigenous cultural strengths, such as collectivism, positive protective factors, and trusted relationships, to create meaningful change in the way healthcare is delivered. The significance of this study lies in highlighting the critical conditions needed to adopt a community-informed way of creating integrated healthcare that works, and the role that the community can play in being part of the solution. Methodologies: The key methodologies utilised are indigenous- and Pacific-informed. To draw critical learnings from the community, Pacific research methodologies, heavily informed by Polynesian practice, were applied. Specifically, these include Teu Le Va (understanding the importance of trusted relationships as a way of creating positive health solutions); the Fonofale methodology (a way of understanding how health incorporates culture, family, and the physical, spiritual, mental, and other dimensions of health, as well as time, context, and environment); the Fonua methodology (understanding the overall well-being and health of communities, families, and individuals, their holistic needs, and environmental factors); and the Talanoa methodology (researching through conversation, where the individual and community are understood through their history and future as told in stories). Major Findings: Key findings in the study included: 1. The collectivist approach in the community is a strengths-based response specific to populations, highlighting the importance of trusted relationships and cultural values in achieving meaningful outcomes. 2. The development of a “village model” which identified critical components for achieving health reform change: system navigation, a sense of service that is culturally responsive, critical leadership roles, culturally appropriate support, and the ability to influence system enablers to support an alternative way of working. Concluding Statement: There is a strong connection between community-based strengths being implemented into healthcare strategies and reforms and the sustainable success of indigenous populations and marginalised communities in accessing services that are cohesive, equitably resourced, accessible, and meaningful for families. This study highlights the successful community-informed approaches and practices used during the COVID-19 response in New Zealand that are now being implemented in the current health reform.
Keywords: indigenous voice, community voice, health reform, New Zealand
377 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, our lives are characterized by an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data (velocity, volume, variety, and value) highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors: business, for improving decision-making processes; healthcare, for clinical record analysis and medical research; education, for enhancing teaching methodologies; agriculture, for optimizing crop management; finance, for risk assessment and fraud detection; media and entertainment, for personalized content recommendations; emergency management, for real-time response during crises and events; and mobility, for urban planning and for the design and management of public and private transport services. Big data’s pervasive impact enhances societal aspects, elevating the quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span the planning, design, and management of systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus and freight vehicle route optimization, vehicle maintenance, road safety, and all the autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions. Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand, and achieving transportation decarbonization goals hinges on precise estimation of the demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation, focusing on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data can enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, cloud computing, decision-making, mobility demand, transportation
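As a small illustration of the per-capita figure quoted above, the sketch below aggregates hypothetical origin-destination trip records into daily totals. The file, schema, and grouping are assumptions; only the 37.6-million mobile-population figure comes from the abstract.

```python
import pandas as pd

# Hypothetical aggregated origin-destination records derived from
# mobile-network data: one row per (day, origin, destination) with a trip count.
od = pd.read_csv("od_daily_trips.csv")  # assumed columns: day, origin, destination, trips

daily_trips = od.groupby("day")["trips"].sum()
mobile_population = 37.6e6  # travelers/day, figure quoted in the abstract

trips_per_capita = daily_trips.mean() / mobile_population
print(f"average daily trips: {daily_trips.mean():.3e}")
print(f"trips per capita:    {trips_per_capita:.2f}")  # ~2.6 in the study
```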
376 Adapting Hazard Analysis and Critical Control Points (HACCP) Principles to Continuing Professional Education
Authors: Yaroslav Pavlov
Abstract:
In the modern world, ensuring quality has become increasingly important in various fields of human activity. One universal approach to quality management, proven effective in the food industry, is the HACCP (Hazard Analysis and Critical Control Points) concept. Based on principles of preventing potential hazards to consumers at all stages of production, from raw materials to the final product, HACCP offers a systematic approach to identifying and assessing risks and managing critical control points (CCPs). Initially used primarily in food production, it was later effectively adapted to the food service sector. Implementing HACCP provides organizations with a reliable foundation for improving food safety, covering all links in the food chain from producer to consumer, making it an integral part of modern quality management systems. The main principles of HACCP (hazard identification, CCP determination, effective monitoring procedures, corrective actions, regular checks, and documentation) are universal and can be adapted to other areas. The adaptation of the HACCP concept to continuing professional education (CPE) is relevant, with certain reservations. Specifically, it is reasonable to abandon the term “hazards”, as deviations at CCPs do not pose dangers, unlike in food production. However, the approach through CCP analysis and the use of HACCP’s main principles are promising for educational services, primarily because they allow key CCPs to be identified from the value creation model of a specific educational organization and, consequently, efforts to be focused on specific CCPs to manage the quality of educational services. This methodology can be called the Analysis of Critical Points in Educational Services (ACPES). ACPES offers a similar approach to managing the quality of educational services, focusing on preventing and eliminating potential risks that could negatively impact the educational process, hinder learners’ achievement of set educational goals, and ultimately lead to students rejecting the organization’s educational services. ACPES adapts proven HACCP principles to educational services, enhancing quality management effectiveness and student satisfaction. It includes identifying potential problems at all stages of the educational process, from initial interest to graduation and career development. In ACPES, the term “hazards” is replaced with “problematic areas”, reflecting the specific nature of the educational environment. Special attention is paid to determining CCPs: the stages where corrective measures can most effectively prevent or minimize the risk of failing educational goals. The ACPES principles align with HACCP’s principles, adjusted for the specificities of CPE. The method of the learner’s journey map (a variation of the customer journey map, CJM) can be used to overcome the complexity of formalizing the production chain in educational services. The CJM provides a comprehensive understanding of the learner’s experience at each stage, facilitating targeted and effective quality management. Thus, integrating the learner’s journey map into ACPES significantly extends the methodology’s capabilities, ensuring a comprehensive understanding of the educational process and forming an effective quality management system focused on meeting learners’ needs and expectations.
Keywords: quality management, continuing professional education, customer journey map, HACCP
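To make the ACPES idea concrete, the sketch below models CCPs along a learner journey with HACCP-style monitoring and corrective actions. The stages, metrics, and thresholds are purely illustrative assumptions; the abstract does not prescribe concrete indicators.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CriticalControlPoint:
    """One stage of the learner journey monitored under ACPES."""
    stage: str                  # e.g. "enrolment", "mid-course", "capstone"
    metric: str                 # what is monitored at this stage
    threshold: float            # limit beyond which corrective action triggers
    corrective_action: str

def check(ccp: CriticalControlPoint, observed: float) -> Optional[str]:
    # HACCP-style monitoring: compare the observed value to the CCP limit.
    return ccp.corrective_action if observed > ccp.threshold else None

journey = [
    CriticalControlPoint("enrolment", "drop-off rate before first session", 0.15,
                         "follow-up call and onboarding session"),
    CriticalControlPoint("mid-course", "share of learners behind schedule", 0.25,
                         "tutor outreach and revised study plan"),
]
for ccp in journey:
    action = check(ccp, observed=0.30)  # illustrative observation
    if action:
        print(f"[{ccp.stage}] corrective action: {action}")
```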
375 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems that determines the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and acting accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level; thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-grams representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, commonly used in computer vision (CV) problems, has lately begun to shape NLP approaches and language models (LM). This gave a sudden rise to the use of pretrained language models (PTM), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, named-entity recognition (NER), question answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT is, in our case, a monolingual model based on transformer neural networks; it uses a masked language model and a next-sentence prediction task, which allow bidirectional training of the transformers. During the training phase of the architecture, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that using deep learning methods with pre-trained models and fine-tuning achieves about an 11% improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM but still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
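The traditional SVM-with-n-grams baseline evaluated in this study is straightforward to assemble with standard tooling. The sketch below is a minimal, generic version with a tiny stand-in for the private 76,000-comment corpus; the toy comments, labels, and pipeline settings are assumptions, not the authors' configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Tiny hypothetical stand-in for the private Turkish comment corpus.
texts = ["ürün harika", "kargo çok geç geldi", "fiyatına göre idare eder"]
labels = ["positive", "negative", "neutral"]

# Bag-of-n-grams baseline: word uni- and bi-grams weighted by TF-IDF,
# fed to a linear SVM, mirroring the traditional approach in the study.
baseline = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LinearSVC(),
)
baseline.fit(texts, labels)
print(baseline.predict(["teslimat hızlıydı"]))  # predicted polarity label
```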
374 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics
Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin
Abstract:
Within the past decade, the use of Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training the network to recognize the fingerspelling gestures of a given language and produce their corresponding alphanumerics. One problem with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population’s fingerspelling harder to detect. In addition, current gesture detection programs are trained on only one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work aims to present a technology that resolves this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language, and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is supposed as an operator mapping an input in the set of images u ∈ U to an output in a set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with the internal variables z ∈ Z, represent the system’s current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xᵢ are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xᵢ, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and successfully identify the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
Keywords: convolutional neural networks, deep learning, shallow correctors, sign language
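The corrector pipeline described above (center, apply the Kaiser rule, whiten, separate the error cluster with a hyperplane) can be sketched in a few lines. The version below is a simplified, single-cluster illustration on synthetic data; the dimensions, thresholding rule, and data are assumptions that go beyond what the abstract specifies.

```python
import numpy as np

def kaiser_whiten(S):
    """Center S, keep principal components with eigenvalue above the mean
    eigenvalue (Kaiser rule), and whiten the retained components."""
    X = S - S.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    keep = vals > vals.mean()                 # Kaiser rule
    W = vecs[:, keep] / np.sqrt(vals[keep])   # whitening transform
    return X @ W

def error_hyperplane(correct, errors):
    """Linear functional separating the error cluster from correct
    predictions; states projecting past the threshold are flagged."""
    w = errors.mean(axis=0) - correct.mean(axis=0)
    w /= np.linalg.norm(w)
    theta = 0.5 * (errors @ w).min() + 0.5 * (correct @ w).max()
    return lambda x: x @ w > theta

rng = np.random.default_rng(0)
M = rng.normal(0.0, 1.0, (500, 40))   # synthetic correct-prediction states
Y = rng.normal(2.5, 1.0, (20, 40))    # synthetic error states
Z = kaiser_whiten(np.vstack([M, Y]))
flag = error_hyperplane(Z[:500], Z[500:])
print("flagged errors:", int(flag(Z[500:]).sum()), "of", len(Y))
```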
373 Application of Typha domingensis Pers. in Artificial Floating for Sewage Treatment
Authors: Tatiane Benvenuti, Fernando Hamerski, Alexandre Giacobbo, Andrea M. Bernardes, Marco A. S. Rodrigues
Abstract:
Population growth in urban areas has caused damage to the environment, a consequence of the uncontrolled dumping of domestic and industrial wastewater. The capacity of some plants to purify domestic and agricultural wastewater has been demonstrated by several studies. Since natural wetlands have the ability to transform, retain, and remove nutrients, constructed wetlands have been used for wastewater treatment. They are widely recognized as an economical, efficient, and environmentally acceptable means of treating many different types of wastewater. The species T. domingensis Pers. has shown good performance and low deployment cost in extracting, detoxifying, and sequestering pollutants. Constructed Floating Wetlands (CFWs) consist of emergent vegetation established upon a buoyant structure floating on surface waters. The upper parts of the vegetation grow and remain primarily above the water level, while the roots extend down into the water column, developing an extensive underwater root system. Thus, the vegetation grows hydroponically, taking up nutrients directly from the water column. Biofilm attaches to the roots and rhizomes, and as physical and biochemical processes take place, the system functions as a natural filter. The aim of this study is to assess the application of macrophytes in artificial floating systems for the treatment of domestic sewage in southern Brazil. The T. domingensis Pers. plants were placed in a flotation system (a polymer structure), at full scale, in a sewage treatment plant. The sewage feed rate was 67.4 ± 8.0 m³.d⁻¹, and the hydraulic retention time was 11.5 ± 1.3 d. This CFW treats the sewage generated by 600 inhabitants, corresponding to 12% of the population served by this municipal treatment plant. Over 12 months, samples were collected every two weeks in order to evaluate parameters such as chemical oxygen demand (COD), biochemical oxygen demand over 5 days (BOD5), total Kjeldahl nitrogen (TKN), total phosphorus, total solids, and metals. The average removal of organic matter was around 55% for both COD and BOD5. For nutrients, TKN was reduced by 45.9%, similar to the total phosphorus removal, while total solids were reduced by 33%. Among the metals, aluminum, copper, and cadmium, although present at low concentrations, showed the highest percentage reductions: 82.7%, 74.4%, and 68.8% respectively. Chromium, iron, and manganese removals reached values around 40-55%. The use of T. domingensis Pers. in artificial floating systems for sewage treatment is an effective and innovative alternative for Brazilian sewage treatment systems. The evaluation of additional parameters in the treatment system may give useful information for improving removal efficiency and increasing the quality of the receiving water bodies.
Keywords: constructed wetland, floating system, sewage treatment, Typha domingensis Pers.
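The removal percentages reported above follow from comparing influent and effluent concentrations. A minimal sketch, using illustrative values only (the abstract reports efficiencies, not raw concentrations):

```python
def removal_efficiency(c_in: float, c_out: float) -> float:
    """Percent removal from influent/effluent concentrations (same units)."""
    return 100.0 * (c_in - c_out) / c_in

# Hypothetical COD concentrations, mg/L.
cod_in, cod_out = 420.0, 190.0
print(f"COD removal: {removal_efficiency(cod_in, cod_out):.0f}%")  # ~55%
```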
372 Electro-Hydrodynamic Effects Due to Plasma Bullet Propagation
Authors: Panagiotis Svarnas, Polykarpos Papadopoulos
Abstract:
Atmospheric-pressure cold plasmas continue to gain increasing interest for various applications due to their unique properties, such as cost-efficient production, high chemical reactivity, low gas temperature, and adaptability. Numerous designs have been proposed for producing these plasmas in terms of electrode configuration, driving voltage waveform, and working gas(es). However, in order to exploit most of the advantages of these systems, the majority of the designs are based on dielectric-barrier discharges (DBDs), in either filamentary or glow regimes. A special category of DBD-based atmospheric-pressure cold plasmas refers to the so-called plasma jets, where a carrier noble gas is guided by the dielectric barrier (usually a hollow cylinder) and left to flow into the atmospheric air, where a complicated hydrodynamic interplay takes place. Although it is now well established that these plasmas are generated by ionizing waves reminiscent in many ways of streamer propagation, they exhibit discrete characteristics that are better mirrored by the terms “guided streamers” or “plasma bullets”. These “bullets” travel with supersonic velocities both inside the dielectric barrier and in the channel formed by the noble gas during its penetration into the air. The present work is devoted to the interpretation of the electro-hydrodynamic effects that take place downstream of the dielectric barrier opening, i.e., in the noble gas-air mixing area where plasma bullets propagate under the influence of local electric fields in regions of variable noble gas concentration. Herein, we focus on the role of the local space charge, and of the residual ionic charge left behind after the bullet propagation, in modifying the gas flow field. The study communicates both experimental and numerical results, coupled in a comprehensive manner. The plasma bullets are produced here by a custom device having a quartz tube as a dielectric barrier and two external ring-type electrodes driven by a sinusoidal high voltage at 10 kHz. Helium gas is fed to the tube, and schlieren photography is employed for mapping the flow field downstream of the tube orifice. The mixture mass conservation equation, momentum conservation equation, energy conservation equation in terms of temperature, and helium transfer equation are solved simultaneously, revealing the physical mechanisms that govern the experimental results. Namely, we deal with electro-hydrodynamic effects mainly due to momentum transfer from atomic ions to neutrals. The atomic ions are left behind as residual charge after the bullet propagation and gain energy from the locally created electric field. The electro-hydrodynamic force is eventually evaluated.
Keywords: atmospheric-pressure plasmas, dielectric-barrier discharges, schlieren photography, electro-hydrodynamic force
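For reference, the ion-to-neutral momentum transfer described above is commonly introduced into the momentum equation as a Coulomb body force on the residual space charge. The abstract does not give the authors' exact expressions, so the form below is a common textbook formulation rather than their model:

```latex
% Momentum equation for the neutral-gas mixture with an
% electro-hydrodynamic source term (a common formulation):
\rho\left(\frac{\partial \mathbf{u}}{\partial t}
      + (\mathbf{u}\cdot\nabla)\mathbf{u}\right)
  = -\nabla p + \nabla\cdot\boldsymbol{\tau} + \mathbf{f}_{\mathrm{EHD}},
\qquad
\mathbf{f}_{\mathrm{EHD}} = e\,(n_i - n_e)\,\mathbf{E}
% Momentum gained by residual ions in the local field E is passed to the
% neutrals through collisions, modifying the helium-air flow field.
```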
371 Designing Gender-Inclusive Urban Space: A Vision for Women’s Safety Cross-Country Movement at Indo-Nepal Border Case of Biratnagar/Jogbani
Authors: Sujan Kumari Chaudhary
Abstract:
The Indo-Nepal border at Biratnagar/Jogbani is a hub where economic exchange and cultural ties are forged daily. Furthermore, the porous, open border not only allows the free movement of people and goods but also makes women more vulnerable, as they are frequently harassed by the local authorities who are supposed to protect them. Drug users also roam this region, and many women report that the place is not safe for women and girls. Moreover, women’s safety is compromised by the open-border policy, which makes trafficking routes difficult to control. Whereas the city was supposed to be a gateway for economic and cultural exchange, a lack of gender sensitivity in urban planning has turned Biratnagar and Jogbani into spaces of insecurity, limiting women’s free movement toward various opportunities. The research undertakes a comprehensive analysis, identifies gaps in existing conditions, and proposes women-centered improvements focusing on gender-based disparities, public-space safety, and crime prevalence. It also addresses the issues that women confront in public spaces and recommends design approaches that prioritize safety and inclusivity. Grounded in a pragmatic paradigm, the research emphasizes useful outcomes and the real-world application of research to solve issues efficiently. The topic “Designing Gender-Inclusive Urban Space: A Vision for Women’s Safety Cross-Country Movement at Indo-Nepal Border Case of Biratnagar/Jogbani” seeks to generate actionable strategies and designs for gender-inclusive urban spaces in a specific socio-spatial context. Pragmatism focuses on solving real-world problems, making it appropriate for gender-safety issues. The pragmatic paradigm is the most appropriate for this subject because it strikes a balance between the necessity of comprehending women’s experiences and the need to provide concrete policy and urban-design solutions. Using the pragmatic paradigm, this study adopts a mixed-methods approach to investigate perceptions of safety for women and girls in a cross-border zone. Both surveys and interviews will be used because sexual harassment is a delicate topic; this mixed-methods approach enables the collection of quantitative data through structured questionnaires and qualitative insights through open-ended interviews, providing a thorough understanding of harassment experiences. The findings aim to provide actionable strategies for policymakers and stakeholders to transform the Indo-Nepal cross-border area into a model of gender-inclusive planning that empowers women and fosters equitable mobility across the border areas.
Keywords: gender-inclusive urban spaces, women's safety, public space design, pragmatic paradigm
370 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. The support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended for cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. The data are rarely linearly separable; SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed into the SVM classifier. A radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The obtained results demonstrated that parameter selection can orient the search within a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
Keywords: classification, airborne LiDAR, parameters selection, support vector machine
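The grid search over (C, γ) with 5-fold cross-validation described above maps directly onto standard tooling. The sketch below assumes a hypothetical per-cluster feature matrix and label vector (the feature extraction step is not detailed in the abstract), and the grid values are illustrative:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical per-cluster features X and labels y (4 classes:
# ground plus three roof-superstructure classes).
X = np.load("cluster_features.npy")
y = np.load("cluster_labels.npy")

# Grid search over (C, gamma) with 5-fold cross-validation.
param_grid = {
    "C": [0.1, 1, 10, 100, 1000],
    "gamma": [1e-4, 1e-3, 1e-2, 1e-1, 1],
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5,
                      scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print("best (C, gamma):", search.best_params_,
      "CV accuracy:", round(search.best_score_, 3))
```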
369 Tall Building Transit-Oriented Development (TB-TOD) and Energy Efficiency in Suburbia: Case Studies, Sydney, Toronto, and Washington D.C.
Authors: Narjes Abbasabadi
Abstract:
As the world continues to urbanize and suburbanize, with suburbanization associated with mass sprawl as the dominant form of this expansion, sustainable development challenges become a growing concern. Sprawl, characterized by low density and automobile dependency, presents significant environmental issues regarding energy consumption and CO2 emissions. This paper examines the vertical expansion of suburbs integrated into mass transit nodes as a planning strategy for boosting density, intensifying land use, converting single-family homes to multifamily dwellings or mixed-use buildings, and developing viable alternative transportation choices. It analyzes the spatial patterns of tall building transit-oriented development (TB-TOD) in suburban regions of Sydney (Australia), Toronto (Canada), and Washington D.C. (United States). The main objectives of this research are to understand the relationship between energy efficiency and the new morphology of suburban tall buildings: the physical dimensions of individual buildings and their arrangement at a larger scale. This study aims to answer these questions: 1) Why and how can the potential phenomenon of vertical expansion or high-rise development be integrated into suburban settings? 2) How can this phenomenon contribute to an overall denser development of suburbs? 3) Which spatial patterns or typologies/sub-typologies of the TB-TOD model have the greatest energy efficiency? It addresses these questions by focusing on: 1) heat energy demand (excluding cooling and lighting) related to design issues at two levels, macro (urban scale) and micro (individual buildings): physical dimensions, height, morphology, the spatial pattern of tall buildings, and their relationship with each other and with transport infrastructure; and 2) examining TB-TOD to provide more evidence of how the model works regarding ridership. The findings of the research show that the TB-TOD model can be identified as the most appropriate spatial pattern for tall buildings in suburban settings, and among the TB-TOD typologies/sub-typologies, compact tall building blocks can be the most energy-efficient. This model is associated with much lower energy demands in buildings at the neighborhood level, as well as lower transport needs at the urban scale, while detached suburban high-rise or low-rise suburban housing will have the lowest energy efficiency. The research methodology is based on a quantitative study using the available literature and statistical data as well as mapping and visual documentation of urban regions through Google Earth, Microsoft Bing Bird's Eye View, and Street View. Each suburb within each city is examined through satellite imagery to identify the typologies/sub-typologies that are morphologically distinct. The study quantifies the heat energy efficiency of different spatial patterns through simulation via GIS software.
Keywords: energy efficiency, spatial pattern, suburb, tall building transit-oriented development (TB-TOD)
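One driver of the heat-demand differences between compact tall blocks and detached low-rise housing is envelope compactness. The toy comparison below computes a surface-to-volume indicator for two morphologies of equal total volume; the dimensions are illustrative assumptions, not values from the study:

```python
def surface_to_volume(width, depth, height):
    """Exposed envelope area (walls + roof, slab-on-grade ignored) per m3."""
    walls = 2 * (width + depth) * height
    roof = width * depth
    return (walls + roof) / (width * depth * height)

# Same total volume: one 60 m compact block vs. four detached 15 m blocks.
compact_tall = surface_to_volume(30, 30, 60)   # single tall block
detached_low = surface_to_volume(30, 30, 15)   # each of four low blocks

print(f"compact tall S/V: {compact_tall:.3f} 1/m")
print(f"detached low S/V: {detached_low:.3f} 1/m")  # higher => more heat loss
```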
368 Mapping Potential Soil Salinization Using Rule Based Object Oriented Image Analysis
Authors: Zermina Q., Wasif Y., Naeem S., Urooj S., Sajid R. A.
Abstract:
Land degradation, a leading environmental problem involving a decrease in the quality of land, has become a major global issue caused by human activities. More than half of the world’s drylands are affected by land degradation. The worldwide extent of primary saline soils is approximately 955 M ha, whereas secondary salinization affects approximately 77 M ha, of which a total of 58% is found in irrigated areas. As most vegetation types require fertile soil for growth and quality production, salinity causes serious problems for the production of these vegetation types and for agricultural demands. This research aims to identify the salt-affected areas in a selected part of the Indus Delta, Sindh province, Pakistan. This mangrove-dominated coastal belt is important to the local community for crop growth. An object-based image analysis approach was adopted on Landsat TM imagery of the year 2011, incorporating different mathematical band ratios, thermal radiance, and a salinity index. Accuracy assessment of the developed salinity land-cover map was performed using the Erdas Imagine Accuracy Assessment Utility. Rainfall was also considered before acquiring satellite imagery and conducting the field survey, as wet soil can greatly affect the apparent condition of saline soil; the dry season is considered best for remote sensing-based observation and monitoring of saline soil. The classified areas were trained with ground truth data with respect to the pH and electrical conductivity of soil samples. The results of the object-based image analysis of Keti Bunder and Kharo Chan show most of the region under low-salinity soil. The total salt-affected soil in Keti Bunder was measured to be 46,581.7 ha, representing 57.81% of the total area of 80,566.49 ha: the high-salinity area was about 7,944.68 ha (9.86%), the medium-salinity area about 17,937.26 ha (22.26%), and the low-salinity area about 20,699.77 ha (25.69%). In Kharo Chan, the total salt-affected soil was measured to be 52,821.87 ha, representing 55.87% of the total area of 94,543.54 ha: the high-salinity area was about 5,486.55 ha (5.80%), the medium-salinity area about 13,354.72 ha (14.13%), and the low-salinity area about 33,980.61 ha (35.94%). These results show that the area is low to medium saline in nature. The accuracy of the soil salinity map was found to be 83%, with a Kappa coefficient of 0.77. This research makes it evident that the area as a whole falls under the category of low to medium salinity and, being close to the coast, can support mangrove forest. As mangroves are salt-tolerant plants, the area is considered a haven for mangrove plantation, which would ultimately benefit both the local community and the environment. An increase in mangrove forest would control the problem of soil salinity and prevent seawater from intruding further into the coastal area, so mangrove deforestation should be regularly monitored.
Keywords: Indus Delta, object based image analysis, soil salinity, thematic mapper
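The overall accuracy and Kappa coefficient reported above follow directly from a classification confusion matrix. The sketch below shows the standard computation on an illustrative 4-class matrix; the counts are made up and do not come from the study's assessment table:

```python
import numpy as np

def kappa(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: mapped)."""
    n = confusion.sum()
    po = np.trace(confusion) / n                              # observed agreement
    pe = (confusion.sum(0) * confusion.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 4-class confusion matrix (high/medium/low saline, non-saline).
cm = np.array([[40, 3, 2, 0],
               [4, 35, 5, 1],
               [2, 6, 38, 4],
               [0, 1, 3, 41]])
print(f"overall accuracy: {np.trace(cm) / cm.sum():.2f}")  # ~0.83 here
print(f"kappa: {kappa(cm):.2f}")                           # ~0.78 here
```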
367 Street Naming and Property Addressing Systems for New Development in Ghana: A Case Study of Nkawkaw in the Kwahu West Municipality
Authors: Jonathan Nii Laryea Ashong, Samuel Opare
Abstract:
The current sustainable-cities debate focuses on the formidable problems of Ghana’s largest urban and rural agglomerations, yet the majority of all urban dwellers continue to reside in far smaller urban settlements. It is estimated that by the year 2030, almost all of Ghana’s population growth will be concentrated in urban areas, including Nkawkaw in the Kwahu West Municipality. Nkawkaw is situated on the road and former railway between Accra and Kumasi, and lies about halfway between these cities. It is also connected by road to Koforidua and Konongo. According to the 2013 census, Nkawkaw has a settlement population of 61,785. Many international agencies, government bodies, and private architects are being asked to adequately address the naming of streets and property addressing systems among the 170 districts across Ghana. The naming of streets and numbering of properties is intended to assist Metropolitan, Municipal and District Assemblies in managing the processes for establishing a coherent address system nationally. Street addressing in Nkawkaw in the Kwahu West Municipality makes it possible to identify the location of a parcel of land, public place, or dwelling on the ground based on a system of names and numbers, yet agreement on how to progress towards it remains elusive. Therefore, reliable and effective development control for proper street naming and property addressing systems is required. The Intelligent Addressing (IA) technology from the UK is being used to name streets and properties in Ghana. Intelligent addressing employs the technique of a Unique Property Reference Number and a Unique Street Reference Number, which would transform the ability of national security agencies and other service providers to respond rapidly to distress calls. Where a name change is warranted following the review of existing street names, the Physical Planning Department (PPD) shall, in consultation with the relevant traditional authorities and community leadership (or relevant major stakeholders), select a street name in accordance with the provisions of the policy and the processes outlined for street name changes for new development. In the case of existing streets with no names, the respective PPDs shall, in consultation with the relevant traditional authorities and community leadership (or relevant major stakeholders), select street names in accordance with the requirements set out in the municipality. Naming of access ways proposed for new developments shall be done at the time of developing sector layouts (subdivision maps) for the designated areas. In the case of private gated developments, the developer shall submit the names of the access ways as part of the plan and other documentation forwarded to the Municipal District Assembly for approval. The names shall be reviewed first by the PPD, to avoid duplication and ensure conformity to the required standards, before submission to the Assembly’s Statutory Planning Committee for approval. The Kwahu West Municipality is supposed to be self-sustaining, providing basic services to inhabitants as a result of the proper planning layouts, street naming, and property addressing system that prevail in the area. The implications of these future projections are discussed.
Keywords: Nkawkaw, Kwahu West Municipality, street naming, property addressing system
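To illustrate how the unique street and property reference numbers relate, the sketch below models a minimal pair of address records. The field names, identifiers, and example street name are hypothetical; the abstract does not specify the IA data schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StreetRecord:
    usrn: int          # unique street reference number
    name: str
    district: str

@dataclass(frozen=True)
class PropertyRecord:
    uprn: int          # unique property reference number
    usrn: int          # street the property fronts onto
    number: str        # plot/house number along that street

street = StreetRecord(usrn=20481, name="Asona Street", district="Kwahu West")
house = PropertyRecord(uprn=900153, usrn=street.usrn, number="12")
print(f"{house.number} {street.name}, {street.district} "
      f"(UPRN {house.uprn}, USRN {street.usrn})")
```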
Procedia PDF Downloads 551
366 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)
Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke
Abstract:
Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and the main nutrient source, the soil, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage and similar practices cause widespread nutrient deficiencies; synthetic fertilizers and organic manures were therefore introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro- and micronutrient supplementation on the growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural and Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of the Ontario-recommended dosages of secondary and micronutrient supplementation. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. In addition to the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design. All cultural practices were carried out according to DOA recommendations. The mean data were subjected to statistical analysis using the SAS package, followed by mean separation (Duncan's Multiple Range Test at the 5% probability level). Treatments containing secondary and micronutrients significantly increased most of the growth parameters: plant height, plant girth, number of leaves, leaf area index, etc. Fruits harvested from pots amended with macro-, secondary and micronutrients performed best in terms of total yield and yield quality compared with pots amended with the DOA-recommended dosage of fertilizer for tomato. This could be attributed to the application of all essential macro- and micronutrients raising photosynthetic activity and promoting efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement for essential macro-, secondary and micronutrient fertilizer supplementation in tomato farming. The study indicated that macro- and micronutrient supplementation practices can influence the growth and yield performance of tomato and are a promising approach to achieving potential tomato yields. Keywords: macro and micronutrients, tomato, SAS package, photosynthates
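Illustration: the abstract analyses a Randomized Complete Block Design (six treatments, three blocks) in SAS with Duncan's Multiple Range Test. Below is a minimal sketch of the equivalent analysis in Python, assuming hypothetical yield data; Tukey's HSD is used as a stand-in for Duncan's test, which statsmodels does not provide.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
treatments = ["DOA", "T1", "T2", "T3", "T4", "T5"]   # hypothetical treatment labels
blocks = ["B1", "B2", "B3"]

# Hypothetical plot-level yields (t/ha): six treatments x three blocks.
data = pd.DataFrame(
    [(t, b, 11 + i * 0.8 + rng.normal(0, 0.5))
     for i, t in enumerate(treatments) for b in blocks],
    columns=["treatment", "block", "yield_t_ha"],
)

# Additive RCBD model: response = treatment effect + block effect.
model = ols("yield_t_ha ~ C(treatment) + C(block)", data=data).fit()
print(anova_lm(model, typ=2))

# Mean separation at the 5% level (Tukey HSD in place of Duncan's MRT).
print(pairwise_tukeyhsd(data["yield_t_ha"], data["treatment"], alpha=0.05))
```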
Procedia PDF Downloads 476
365 Exploring Closed-Loop Business Systems Which Eliminates Solid Waste in the Textile and Fashion Industry: A Systematic Literature Review Covering the Developments Occurred in the Last Decade
Authors: Bukra Kalayci, Geraldine Brennan
Abstract:
Introduction: Over the last decade, a proliferation of literature on the textile and fashion business in the context of sustainable production and consumption has emerged. However, the economic and environmental benefits of solid waste recovery have not been comprehensively researched, so end-of-life and end-of-use textile waste management remains a gap. Solid textile waste reuse and recycling principles of the circular economy need to be developed to close the disposal stage of the textile supply chain. Environmental problems associated with the over-production and over-consumption of textile products are mounting: together with a growing population and the fast-fashion culture, the share of solid textile waste in municipal waste is increasing. Focusing on the post-consumer textile waste literature, this research explores the opportunities, obstacles and enablers or success factors associated with closed-loop textile business systems. Methodology: A systematic literature review was conducted to identify best practices and gaps in the existing body of knowledge on closed-loop post-consumer textile waste initiatives over the last decade. The selected keywords, namely 'cradle-to-cradle', 'circular* economy*', 'closed-loop*', 'end-of-life*', 'reverse* logistic*', 'take-back*', 'remanufacture*' and 'upcycle*', were combined (AND) with 'fashion*', 'garment*', 'textile*', 'apparel*' and 'clothing*', and the time frame of the review was set between 2005 and 2017. To obtain broad coverage, the Web of Knowledge and Science Direct databases were used, and peer-reviewed journal articles were chosen. The keyword search identified 299 papers, which were further refined into 54 relevant papers that form the basis of the in-depth thematic analysis. Preliminary findings: A key finding was that the existing literature is predominantly conceptual rather than applied or empirical work. Moreover, the enablers or success factors, obstacles and opportunities for implementing closed-loop systems in the textile industry were not clearly articulated, and the following considerations were largely overlooked in the literature. While the circular economy envisages multiple cycles of discarded products, components or materials, most research to date has tended to focus on a single cycle; calculations of the environmental and economic benefits of closed-loop systems are thus limited to one cycle, which does not adequately explore the feasibility or potential benefits of multiple cycles. Additionally, the time a textile product spends between point of sale and end-of-use/end-of-life return is a crucial factor. Despite past efforts to study closed-loop textile systems, a clear gap in the literature is the lack of an evaluation framework enabling manufacturers to clarify the reusability potential of textile products through indicators related to quality, design, lifetime, length of time between manufacture and product return, volume of collected disposed products, material properties, and brand-segment considerations (e.g. fast fashion versus luxury brands). Keywords: circular fashion, closed loop business, product service systems, solid textile waste elimination
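Illustration: the review protocol pairs two keyword groups with a boolean AND and a 2005-2017 window. The sketch below assembles such a query string in Python; the `PY=(...)` year-range tag is a Web of Science-style field tag used here as an assumption, and the exact syntax varies by database.

```python
from itertools import product

# Keyword sets taken from the review protocol (wildcards as in the abstract).
circularity_terms = ["cradle-to-cradle", "circular* economy*", "closed-loop*",
                     "end-of-life*", "reverse* logistic*", "take-back*",
                     "remanufacture*", "upcycle*"]
sector_terms = ["fashion*", "garment*", "textile*", "apparel*", "clothing*"]

def build_query(group_a, group_b, year_from=2005, year_to=2017):
    """Compose a boolean search string: (A1 OR A2 ...) AND (B1 OR B2 ...)."""
    a = " OR ".join(f'"{t}"' for t in group_a)
    b = " OR ".join(f'"{t}"' for t in group_b)
    return f"({a}) AND ({b}) AND PY=({year_from}-{year_to})"

print(build_query(circularity_terms, sector_terms))

# The cross-product view: 8 circularity terms x 5 sector terms = 40 elementary
# keyword pairings sit behind the single boolean query.
print(len(list(product(circularity_terms, sector_terms))))
```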
Procedia PDF Downloads 204
364 Monocoque Systems: The Reuniting of Divergent Agencies for Wood Construction
Authors: Bruce Wrightsman
Abstract:
Construction and design are inexorably linked. Traditional building methodologies, including those using wood, comprise a series of material layers differentiated and separated from each other. This separates two agencies: the building envelope (skin) and the structure. From a material-performance standpoint, however, this reliance on additional materials is not an efficient strategy for the building. The merits of traditional platform framing are well known, yet its enormous effectiveness within wood-framed construction has seldom been seriously questioned in defining what it means to build. Several downsides of this method are less widely discussed. The first, and perhaps biggest, is waste. Second, its reliance on wood assemblies forming walls, floors and roofs, conventionally nailed together through simple plate surfaces, is structurally inefficient: it requires additional material in plates, blocking, nailers, etc., for stability, which only adds to the material waste. In contrast, looking back at the history of wood construction in the airplane- and boat-building industries reveals a significant transformation in the relationship between structure and skin. Boat construction evolved from the indigenous wood practices of birch-bark canoes, to copper sheathing over wood to improve performance in the late 18th century, to the merged assemblies that drive the industry today. In 1911, the Swiss engineer Emile Ruchonnet designed the first wood monocoque structure for an airplane, called the Cigare. The wing and tail assemblies consisted of thin, lightweight, often fabric skin stretched tightly over a wood frame. This stressed skin has evolved into semi-monocoque construction, in which the skin merges with structural fins that take additional forces, providing even greater strength with less material. The monocoque, which translates as 'single shell', is a structural system that supports loads and transfers them through an external enclosure system. Such systems have largely existed outside the domain of architecture, yet this uniting of divergent systems has been demonstrated to be lighter, using less material than traditional wood building practices. This paper examines the role monocoque systems have played in the history of wood construction through the lineage of the boat- and airplane-building industries, and their design potential for wood building systems in architecture, through a case-study examination of a unique construction approach. The innovative approach uses a wood monocoque system comprised of interlocking small wood members to create thin-shell assemblies for the walls, roof and floor, increasing structural efficiency and wasting less than 2% of the wood. The goal of the analysis is to expand the work of practice and the academy in order to foster deeper, more honest discourse regarding the limitations and impact of traditional wood framing. Keywords: wood building systems, material histories, monocoque systems, construction waste
Procedia PDF Downloads 79
363 An Introduction to the Radiation-Thrust Based on Alpha Decay and Spontaneous Fission
Authors: Shiyi He, Yan Xia, Xiaoping Ouyang, Liang Chen, Zhongbing Zhang, Jinlu Ruan
Abstract:
As key systems of spacecraft, various propulsion systems have been developing rapidly, including ion thrusters, laser propulsion, solar sails and other micro-thrusters. However, these systems still have shortcomings. The ion thruster requires a high voltage or magnetic field for acceleration, resulting in extra subsystems, added mass and large volume. Laser propulsion is currently mostly ground-based and provides pulsed thrust, constrained by station distribution and laser capacity. The thrust direction of a solar sail is limited by its position relative to the Sun, so it is hard to propel toward the Sun or to maneuver in shadow. In this paper, a novel nuclear thruster based on alpha decay and spontaneous fission is proposed, and the principle of this radiation thrust using alpha particles is expounded. Radioactive materials with different released energies, such as 210Po at 5.4 MeV and 238Pu at 5.29 MeV, attached to a metal film provide thrusts in the range 0.02-5 uN/cm2. With this repulsive force, radiation is able to serve as a power source. With the advantages of low system mass, high accuracy and long active time, the radiation thruster is promising for space debris removal, orbit control of nano-satellite arrays and deep-space exploration. For further study, a formula relating the amplitude and direction of thrust to the released energy and decay coefficient is set up. Using this initial formula, alpha-emitting elements with half-lives longer than one hundred days are calculated and listed. As alpha particles are emitted continuously, the residual charge in the metal film grows and affects the energy distribution of the emitted alpha particles; the emission behaviour under residual charge or an external electromagnetic field is analyzed in this paper. Furthermore, three more complex situations are discussed: a radioactive element generating alpha particles at several energies with different intensities, a mixture of various radioactive elements, and cascaded alpha decay. Combining these, the thrust amplitude can be adjusted more efficiently and flexibly. The propulsion model for spontaneous fission is similar to that for alpha decay, but with a more complex angular distribution. A new quasi-spherical space propulsion system based on the radiation thrust is introduced, together with the system for collecting and processing excess charge and reaction heat. The energy and spatial angular distributions of alpha particles emitted per unit area and for a given propulsion system are studied. Since alpha particles easily lose energy and are self-absorbed, the distribution is not a simple stacking of each nuclide. By varying the amplitude and angle of the radiation thrust, an orbital-variation strategy for space debris removal is shown and optimized. Keywords: alpha decay, angular distribution, emitting energy, orbital variation, radiation-thruster
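Illustration: to make the quoted thrust scale concrete, the sketch below estimates thrust per unit area from alpha momentum p = sqrt(2mE) and an isotropic half-space emission assumption. The surface activity value and the mean-cosine factor of 0.5 are illustrative assumptions, not figures from the paper; self-absorption and residual-charge effects are neglected.

```python
import math

MEV = 1.602176634e-13        # joules per MeV
M_ALPHA = 6.6446573e-27      # alpha-particle mass, kg

def alpha_momentum(energy_mev):
    """Non-relativistic alpha momentum, p = sqrt(2 m E)."""
    return math.sqrt(2.0 * M_ALPHA * energy_mev * MEV)

def thrust_per_area(activity_per_cm2, energy_mev, cos_mean=0.5):
    """Order-of-magnitude thrust (N/cm^2) from a one-sided emitting film.

    Assumes isotropic emission into the free hemisphere, so the mean normal
    momentum component is p * <cos(theta)> with <cos(theta)> = 0.5.
    """
    return activity_per_cm2 * alpha_momentum(energy_mev) * cos_mean

# Example: a 210Po film (5.4 MeV) with an assumed surface activity of
# 1e13 decays per cm^2 per second (~0.3 TBq/cm^2).
f = thrust_per_area(1e13, 5.4)
print(f"{f * 1e6:.3f} uN/cm^2")   # ~0.54 uN/cm^2, inside the quoted 0.02-5 range
```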
Procedia PDF Downloads 209
362 Monitoring of Rice Phenology and Agricultural Practices from Sentinel 2 Images
Authors: D. Courault, L. Hossard, V. Demarez, E. Ndikumana, D. Ho Tong Minh, N. Baghdadi, F. Ruget
Abstract:
In the context of global change, efficient management of available resources has become one of the most important topics, particularly for sustainable crop development. Timely assessment with high precision is crucial for water-resource and pest management. Rice cultivated in the Camargue region of Southern France faces a dual challenge: reducing soil salinity by flooding while at the same time reducing the use of herbicides that negatively impact the environment. This context has led farmers to diversify their crop rotations and agricultural practices. The objective of this study was to evaluate this diversity, both in cropping systems and in the agricultural practices applied to rice paddies, in order to quantify its impact on the environment and on crop production. The proposed method is based on the combined use of crop models and multispectral data acquired by the recent Sentinel-2 satellite sensors launched by the European Space Agency (ESA) within the framework of the Copernicus program. More than 40 images at fine spatial resolution (10 m in the optical range) were processed for 2016 and 2017 (with a revisit time of 5 days) to map crop types using the random forest method and to estimate biophysical variables (LAI) retrieved by inversion of the PROSAIL canopy radiative-transfer model. Thanks to the high revisit frequency of Sentinel-2, it was possible to monitor soil tillage before flooding and the second sowing made by some farmers to better control weeds. The temporal trajectories of the remote sensing data were analyzed for various rice cultivars to define the main parameters describing the phenological stages, which were then used to calibrate two crop models (STICS and SAFY). Results were compared with surveys conducted on 10 farms. Large variability in LAI was observed at the farm scale (up to 2-3 m²/m²), which induced significant variability in the simulated yields (up to 2 t/ha). Land-use observations were also collected on more than 300 fields. Several maps were produced: land use, LAI, flooding and sowing dates, and harvest dates. Together these maps support a new typology for classifying these paddy cropping systems. Key phenological dates can be estimated from inverse procedures and were validated against ground surveys. The proposed approach made it possible to compare years and to detect anomalies. The methods proposed here can be applied to different crops in various contexts and confirm the potential of remote sensing acquired at fine resolution, such as the Sentinel-2 system, for agricultural applications and environmental monitoring. This study was supported by the French space agency (CNES) through the TOSCA program. Keywords: agricultural practices, remote sensing, rice, yield
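Illustration: the crop-type mapping step uses a random forest on per-field Sentinel-2 time series. Below is a minimal sketch of that step with scikit-learn, assuming hypothetical reflectance features and class labels; the feature layout and class names are assumptions, not the study's actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical feature matrix: one row per field, columns are per-date band
# reflectances stacked over the Sentinel-2 time series (20 dates x 4 bands).
n_fields, n_dates, n_bands = 300, 20, 4
X = rng.random((n_fields, n_dates * n_bands))
y = rng.integers(0, 3, n_fields)        # assumed classes: 0=rice, 1=wheat, 2=fallow

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["rice", "wheat", "fallow"]))
```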
Procedia PDF Downloads 275
361 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta
Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi
Abstract:
Monitoring reservoir properties prior to well placement and production is a requirement for optimized and efficient oil and gas production. This is usually done using well-log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, which incorporates numerous 3-D seismic surveys of the same field acquired with the same parameters and portrays the transient changes in the reservoir due to production effects over time, can be utilised because it generates better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log and geologic data to reservoir monitoring in the EK field of the Niger Delta, with the aims of locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² of early Miocene age (18 Ma). Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analysed to determine the seismic structures, horizons, well-to-seismic tie (WST) and wavelets. Well logs and production-history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis and inversion, alongside geological data, were undertaken using Petrel, Shell-nDi, Techlog and Jason software. Well-to-seismic ties, formation evaluation and saturation monitoring based on petrophysical and geological data were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the changes in the reservoir were defined by the differences between the acoustic impedance (AI) inversions of the base and the monitor seismic. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion, and the estimated rock properties were used to produce AI maps. The structural analysis showed the dominance of NW-SE-trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. The analysed wavelets revealed consistent amplitude and phase for the WST, hence a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around well EK 39, with good yield properties. The comparison between the AI of the base and of the current monitor, together with the generated AI maps, revealed zones of untapped hydrocarbons and assisted in determining fluid movement. The inverted sections through EK 27 and EK 39 (within 3101-3695 m) indicated depletion in the reservoirs. The extent of the present non-uniform gas-oil-contact and oil-water-contact movements was from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterization and well development, and to the location of deeper and bypassed hydrocarbon reservoirs. Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger delta basin
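Illustration: the core time-lapse operation is differencing the base and monitor acoustic-impedance volumes and keeping only changes above a repeatability threshold. The sketch below shows that step on synthetic volumes; the noise threshold, volume dimensions and "hardening" anomaly are illustrative assumptions, not values from the EK field study.

```python
import numpy as np

def time_lapse_difference(ai_base, ai_monitor, noise_threshold=0.02):
    """Relative AI change between base and monitor volumes.

    Values below the assumed repeatability threshold are zeroed, so the
    remaining anomalies flag production-related change such as fluid-contact
    movement or bypassed (untapped) zones.
    """
    dai = (ai_monitor - ai_base) / ai_base          # fractional change
    return np.where(np.abs(dai) > noise_threshold, dai, 0.0)

# Hypothetical inverted AI volumes (inline x crossline x time samples).
rng = np.random.default_rng(7)
ai_base = 6e6 * (1.0 + 0.05 * rng.standard_normal((50, 50, 100)))
ai_monitor = ai_base.copy()
ai_monitor[20:30, 20:30, 40:60] *= 1.08   # simulated impedance hardening after depletion

dai = time_lapse_difference(ai_base, ai_monitor)
print("anomalous cells:", int(np.count_nonzero(dai)))   # 2000 cells in the patch
```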
Procedia PDF Downloads 117
360 A Review on Cyberchondria Based on Bibliometric Analysis
Authors: Xiaoqing Peng, Aijing Luo, Yang Chen
Abstract:
Background: Cyberchondria, an 'emerging risk' of the information era, is an abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a substantial threat to public health. Objective: To explore and discuss the research status, hotspots and trends of cyberchondria. Methods: Based on 77 articles on 'cyberchondria' extracted from the Web of Science from inception to October 2019, literature trends, countries, institutions and hotspots were analyzed by bibliometric analysis, and the concept definition of cyberchondria, measurement instruments, relevant factors, and treatment and intervention are discussed. Results: Since 'cyberchondria' was first put forward in 2001, the last two decades have witnessed a noticeable increase in the volume of literature; during 2014-2019 it quadrupled, reaching 62 publications compared with only 15 before 2014, showing that cyberchondria has become a hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories 'Psychiatry/Psychology' and 'Computer/Information Science' were the areas of greatest influence. The concept definition of cyberchondria is not completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state, invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown not to be necessary to the construct. Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, and the CSS was optimized into a simplified version (CSS-12) in 2019, all of which merit further validation. The hotspots of cyberchondria research mainly concern relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley Index and problematic internet use, seeking to clarify the roles played by 'associated factors' and 'anxiety-amplifying factors' in the development of cyberchondria and to better understand the aetiological links and pathways among hypochondriasis, health anxiety and online health-related searches. Although the treatment of cyberchondria is still at an initial, exploratory stage, there have been meaningful attempts to find effective strategies from different angles, such as online psychological treatment, network technology management, health information literacy improvement and public health services. Conclusion: Research on cyberchondria is in its infancy but deserves more attention. A conceptual consensus on cyberchondria, a refined assessment tool, prospective studies conducted in various populations and targeted treatments would be the main research directions in the near future. Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches
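Illustration: the bibliometric counts (publications per period, country ranking) reduce to simple group-by operations over the exported records. Below is a minimal pandas sketch using a few hypothetical records standing in for the 77-article Web of Science export; the field names and values are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical records mimicking a Web of Science export
# (fields: publication year, country of the corresponding author).
records = pd.DataFrame({
    "year":    [2001, 2014, 2015, 2016, 2017, 2018, 2019, 2019],
    "country": ["USA", "USA", "England", "Australia", "USA",
                "Australia", "England", "USA"],
})

# Publications per period, reproducing the before/after-2014 comparison.
per_period = records["year"].lt(2014).map({True: "pre-2014", False: "2014-2019"})
print(records.groupby(per_period).size())

# Country ranking by publication count.
print(records["country"].value_counts())
```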
Procedia PDF Downloads 124
359 Various Shaped ZnO and ZnO/Graphene Oxide Nanocomposites and Their Use in Water Splitting Reaction
Authors: Sundaram Chandrasekaran, Seung Hyun Hur
Abstract:
Exploring strategies for oxygen-vacancy engineering under mild conditions and understanding the relationship between dislocations and photoelectrochemical (PEC) cell performance are challenging issues in designing high-performance PEC devices. It is therefore very important to understand how oxygen vacancies (VO) or other defect states affect the performance of a photocatalyst in photoelectric transfer. So far, defects in nano- or microcrystals have been found to have two possible effects on PEC performance. On the one hand, an electron-hole pair produced at the interface of the photoelectrode and the electrolyte can recombine at defect centers under illumination, reducing PEC performance. On the other hand, defects can lead to higher light absorption in the longer-wavelength region and may act as energy centers for the water-splitting reaction, improving PEC performance. Although the dislocation growth of ZnO has been verified by full density functional theory (DFT) calculations and local density approximation (LDA) calculations, further studies are required to correlate ZnO structures with PEC performance. Exploring hybrid structures composed of graphene oxide (GO) and ZnO nanostructures offers not only a vision of how complex structures form from simple starting materials but also the tools to improve PEC performance by understanding the underlying mechanisms of their mutual interactions. Since there are few studies of ZnO growth with other materials, and the growth mechanism in those cases has not yet been clearly explored, it is very important to understand the fundamental growth process of nanomaterials with the specific materials, so that rational and controllable syntheses of efficient ZnO-based hybrid materials can be designed to prepare nanostructures exhibiting significant PEC performance. Herein, we fabricated various ZnO nanostructures, such as hollow spheres, bucky bowls, nanorods and triangles, investigated their pH-dependent growth mechanism, and correlated their PEC performance. In particular, the origin of well-controlled dislocation-driven growth and the transformation mechanism of ZnO nanorods into triangles on the GO surface are discussed in detail. Surprisingly, the addition of GO during the synthesis process not only tunes the morphology of the ZnO nanocrystals but also creates more oxygen vacancies (oxygen defects) in the ZnO lattice, which clearly suggests that the oxygen vacancies are created by a redox reaction between GO and ZnO in which surface oxygen is extracted from the ZnO surface by the functional groups of GO. On the basis of our experimental and theoretical analyses, the detailed mechanism for the formation of the specific structural shapes and oxygen vacancies via dislocations, and its impact on PEC performance, is explored. In the water-splitting tests, the maximum photocurrent density of the GO-ZnO triangles was 1.517 mA/cm² (under ~360 nm UV light) vs. RHE, with a high incident photon-to-current conversion efficiency (IPCE) of 10.41%, the highest among all samples fabricated in this study and also one of the highest IPCEs reported so far for a GO-ZnO triangular-shaped photocatalyst. Keywords: dislocation driven growth, zinc oxide, graphene oxide, water splitting
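Illustration: IPCE relates photocurrent density, wavelength and incident power through the standard relation IPCE(%) = 1240 × J / (λ × P). The sketch below applies it to the reported figures; the incident power density is back-calculated to make the numbers consistent and is an assumption, since the abstract does not report it.

```python
def ipce_percent(j_ma_cm2, wavelength_nm, p_mw_cm2):
    """Standard IPCE relation: IPCE(%) = 100 * 1240 * J / (lambda * P),
    with J in mA/cm^2, lambda in nm, and P in mW/cm^2."""
    return 100.0 * 1240.0 * j_ma_cm2 / (wavelength_nm * p_mw_cm2)

# Consistency check with the reported values: at 360 nm, a photocurrent of
# 1.517 mA/cm^2 gives ~10.41% IPCE if the incident power density was about
# 50 mW/cm^2 (assumed, not reported in the abstract).
print(f"{ipce_percent(1.517, 360.0, 50.2):.2f} %")   # -> 10.41 %
```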
Procedia PDF Downloads 296
358 Provotyping Futures Through Design
Authors: Elisabetta Cianfanelli, Maria Claudia Coppola, Margherita Tufarelli
Abstract:
Design practices throughout history offer a critical understanding of society, since they have always conveyed values and meanings aimed at (re)framing reality by acting in everyday life: here, design gains a cultural and normative character, since its artifacts, services and environments hold the power to intercept, influence and inspire thoughts, behaviors and relationships. In this sense, design can be persuasive, engaging in the production of worlds and, as such, acting in the space between poietics and politics, so that chasing preferable futures and their aesthetic strategies becomes a matter full of political responsibility. This resonates with contemporary landscapes of radical interdependencies, which challenge designers to focus on complex socio-technical systems and to better support values such as equality and justice for both humans and nonhumans. Indeed, it is in times of crisis and structural uncertainty that designers turn into visionaries at the service of society, envisioning scenarios and dwelling in the territories of imagination to conceive new fictions and frictions to be added to the thickness of the real. Here, design's main tasks are to develop options, to increase the variety of choices, and to cultivate its role as scout, jester and agent provocateur for the public, so that a design for transformation emerges, making an explicit commitment to society and furthering structural change in a proactive and synergic manner. However, the exploration of possible futures is both a trap and a trampoline: although it embodies a radical research tool, it raises various challenges when the design process goes further, translating a vision into an artifact, whether tangible or intangible, through which it should deliver that bit of future into everyday experience. Today designers are devising new tools and practices to tackle current wicked challenges, combining their approaches with other disciplinary domains: futuring through design thus arises from research strands such as speculative design, design fiction and critical design, where the blending of design approaches and futures thinking brings an action-oriented and product-based approach to strategic insights. This contribution positions itself at the intersection of those approaches, aiming to discuss design's tools of inquiry, through which it is possible to grasp the agency of imagined futures in the present. Since futures are not remote, they actively participate in creating path-dependent decisions, crystallized into the designed artifacts par excellence, prototypes, and their conceptual other, provotypes: with both being unfinished and multifaceted, the first are effective in reiterating solutions to problems already framed, while the second prove useful when the goal is to explore and break boundaries, bringing preferable futures closer. By focusing on provotypes throughout history that challenged markets and, above all, social and cultural structures, the contribution's final aim is to understand the knowledge produced by provotypes, understood as design spaces where design's humanistic side might help develop a deeper sensibility toward uncertainty and, above all, the unfinished character of societal artifacts, whose experimentation leaves marks and traces from which to build f(r)ictions as vital sparks of plurality and collective life. Keywords: speculative design, provotypes, design knowledge, political theory
Procedia PDF Downloads 135
357 Study Protocol: Impact of a Sustained Health Promoting Workplace on Stock Price Performance and Beta - A Singapore Case
Authors: Wee Tong Liaw, Elaine Wong Yee Sing
Abstract:
Since 2001, many companies in Singapore have voluntarily participated in the biennial Singapore HEALTH Award (SHA) initiated by the Health Promotion Board of Singapore (HPB). The SHA is an industry-wide award and assessment process that assesses and recognizes employers in Singapore for implementing comprehensive and sustainable health-promotion programmes at their workplaces. The rationale for implementing a sustained health-promoting workplace and participating in the SHA is obvious when company management is convinced that healthier employees, business productivity and profitability are positively correlated. However, research or empirical studies on the impact of a sustained health-promoting workplace on stock returns are unlikely to attract interest in the absence of a systematic and independent assessment of the comprehensiveness and sustainability of health-promoting workplaces, as is the case in most developed economies. The principles of diversification and the mean-variance efficient portfolio in Modern Portfolio Theory developed by Markowitz (1952) laid the foundation for the work of many financial economists and researchers, among others the development of the Capital Asset Pricing Model from the work of Sharpe (1964), Lintner (1965) and Mossin (1966), and the Fama-French three-factor model of Fama and French (1992). This research seeks to support the rationale by studying whether a sustained health-promoting workplace has a significant relationship with, or impact on, the performance of companies listed on the SGX. The research will form and test hypotheses on the impact of a sustained health-promoting workplace on company performance, including stock returns, for companies that participated in the SHA and companies that did not. In doing so, the research will determine whether corporate and fund managers should consider a sustained health-promoting workplace a risk factor that explains the stock returns of companies listed on the SGX. With respect to Singapore's stock market, this research will test the significance and relevance of a health-promoting workplace using the Singapore HEALTH Award as a proxy for a non-diversifiable risk factor explaining stock returns. The study will examine the significance of a health-promoting workplace for a company's performance, study its impact on stock price performance and beta, and examine whether it has higher explanatory power than the traditional single-factor asset pricing model, the CAPM (Capital Asset Pricing Model). Three key questions are pertinent to the study. (I) Given a choice, would an investor be better off investing in a listed company with a sustained health-promoting workplace, i.e. a Singapore HEALTH Award recipient? (II) The Singapore HEALTH Award has four levels, from Bronze and Silver through Gold to Platinum; would an investor be indifferent to the level of the award when investing in a listed company that is a recipient? (III) Would an asset pricing model combining the Fama-French three-factor model and a 'Singapore HEALTH Award' factor be more accurate than the single-factor Capital Asset Pricing Model and the three-factor model itself? Keywords: asset pricing model, company's performance, stock prices, sustained health promoting workplace
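Illustration: question (III) amounts to regressing excess returns on the three Fama-French factors plus a health-award factor and comparing explanatory power against the CAPM. The sketch below shows that regression on simulated data; the factor construction (an SHA long-short portfolio) and all return series are assumptions for illustration, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120  # hypothetical months of returns

# Simulated monthly factor returns; SHA is the assumed health-award factor,
# e.g. award recipients minus non-recipients (long-short portfolio).
factors = pd.DataFrame({
    "MKT_RF": rng.normal(0.005, 0.04, n),
    "SMB":    rng.normal(0.002, 0.02, n),
    "HML":    rng.normal(0.002, 0.02, n),
    "SHA":    rng.normal(0.001, 0.015, n),
})
excess_ret = (0.4 * factors["MKT_RF"] + 0.2 * factors["SMB"]
              + 0.1 * factors["HML"] + 0.3 * factors["SHA"]
              + rng.normal(0, 0.01, n))

# Augmented model: R_i - R_f = a + b*MKT_RF + s*SMB + h*HML + w*SHA + e
model = sm.OLS(excess_ret, sm.add_constant(factors)).fit()
print(model.params)
print("augmented adj R^2:", round(model.rsquared_adj, 3))

# Comparing adjusted R^2 against the CAPM (market factor only) mirrors (III).
capm = sm.OLS(excess_ret, sm.add_constant(factors[["MKT_RF"]])).fit()
print("CAPM adj R^2:", round(capm.rsquared_adj, 3))
```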
Procedia PDF Downloads 370
356 Management Potentialities Of Rice Blast Disease Caused By Magnaporthe Grisae Using New Nanofungicides Derived From Chitosan
Authors: Abdulaziz Bashir Kutawa, Khairulmazmi Ahmad, Mohd Zobir Hussein, Asgar Ali, Mohd Aswad Abdul Wahab, Amara Rafi, Mahesh Tiran Gunasena, Muhammad Ziaur Rahman, Md Imam Hossain, Syazwan Afif Mohd Zobir
Abstract:
Various abiotic and biotic stresses affect rice production all around the world. Rice blast, the most serious and prevalent disease of rice plants, is one of the major obstacles to rice production and one of the diseases with the greatest negative effect on rice farming globally; it is caused by the fungus Magnaporthe grisae. Since nanoparticles have been shown to inhibit certain types of fungi, nanotechnology offers a novel way to enhance agriculture by battling plant diseases. Nanocarrier systems enable active chemicals to be absorbed, attached and encapsulated to produce efficient nanodelivery formulations. The objectives of this work were to determine the efficacy and mode of action of the nanofungicides in vitro and under field conditions (in vivo). The ionic gelation method was used to develop the nanofungicides. Using the poisoned-media method, the in vitro antifungal activity of the synthesized agronanofungicides was assessed against M. grisae. Potato dextrose agar (PDA) was amended at several concentrations: 0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.15, 0.20, 0.25, 0.30 and 0.35 ppm for the nanofungicides; medium with the solvent alone served as a control. Mycelial growth was measured every day, and the percentage inhibition of radial growth (PIRG) was computed. Based on the zone-of-inhibition results, the chitosan-hexaconazole agronanofungicide (2 g/mL) was the most effective fungicide, fully inhibiting the growth of the fungus (100%) at 0.2, 0.25, 0.30 and 0.35 ppm. It was followed by the carbendazim analytical fungicide, which achieved 100% inhibition at 5, 10, 25, 50 and 100 ppm. The least effective were propiconazole and basamid, with 100% inhibition only at 100 ppm. Scanning electron microscopy (SEM), confocal laser scanning microscopy (CLSM) and transmission electron microscopy (TEM) were used to study the modes of action on M. grisae cells. The results showed that carbendazim, chitosan-hexaconazole and HXE were the most effective fungicides in disrupting the mycelia and the internal structures of the fungal cells. The field assessment showed that the CHDEN treatment (5 g/L, double dosage) was the most effective fungicide in reducing the intensity of rice blast, with a DSI of 17.56%, lesion length of 0.43 cm, DR of 82.44%, AUDPC of 260.54 unit², and PI of 65.33%. The least effective treatment was chitosan-hexaconazole-dazomet (2.5 g/L, MIC). The use of the CHDEN and CHEN nanofungicides will significantly help lessen the severity of rice blast in the field, increasing output and profit for rice farmers. Keywords: chitosan, hexaconazole, disease incidence, and magnaporthe grisae
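Illustration: two of the abstract's metrics have standard textbook definitions: PIRG = (R1 - R2)/R1 × 100, with R1 the control colony radius and R2 the radius on amended medium, and AUDPC computed by the trapezoidal rule over disease-severity assessments. The sketch below implements both; the radii and severity scores are hypothetical values, not the study's measurements.

```python
import numpy as np

def pirg(control_radius, treated_radius):
    """Percentage inhibition of radial growth: PIRG = (R1 - R2) / R1 * 100."""
    return (control_radius - treated_radius) / control_radius * 100.0

def audpc(days, severity):
    """Area under the disease progress curve by the trapezoidal rule:
    AUDPC = sum_i (y_i + y_{i+1}) / 2 * (t_{i+1} - t_i)."""
    days = np.asarray(days, dtype=float)
    severity = np.asarray(severity, dtype=float)
    return float(np.sum((severity[:-1] + severity[1:]) / 2.0 * np.diff(days)))

# Hypothetical daily colony radii (mm) and weekly field severity scores (%).
print(f"PIRG: {pirg(45.0, 9.0):.1f} %")                     # -> 80.0 %
print(f"AUDPC: {audpc([0, 7, 14, 21], [2, 8, 15, 20]):.1f} unit^2")  # -> 238.0
```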
Procedia PDF Downloads 70
355 Improvement of Oxidative Stability of Edible Oil by Microencapsulation Using Plant Proteins
Authors: L. Le Priol, A. Nesterenko, K. El Kirat, K. Saleh
Abstract:
Introduction and objectives: Polyunsaturated fatty acids (PUFAs) omega-3 and omega-6 are widely recognized as beneficial to health and normal growth. Unfortunately, due to their highly unsaturated nature, these molecules are sensitive to oxidation and thermal degradation, leading to the production of toxic compounds and unpleasant flavors and smells. It is therefore necessary to find a suitable way to protect them. Microencapsulation by spray-drying is a low-cost encapsulation technology and the one most commonly used in the food industry. Many compounds can be used as wall materials, but there has been growing interest in the use of biopolymers, such as proteins and polysaccharides, in recent years. The objective of this study is to increase the oxidative stability of sunflower oil by microencapsulation in plant-protein matrices using the spray-drying technique. Material and methods: Sunflower oil was used as a model substance for oxidizable food oils. Proteins from brown rice, hemp, pea, soy and sunflower seeds were used as emulsifiers and microencapsulation wall materials. First, the proteins were solubilized in distilled water. The emulsions were then pre-homogenized using a high-speed homogenizer (Ultra-Turrax) and stabilized using a high-pressure homogenizer. Drying of the emulsion was performed in a mini spray dryer. The oxidative stability of the encapsulated oil was determined by performing accelerated oxidation tests with a Rancimat. The size of the microparticles was measured using a laser diffraction analyzer, and their morphology was acquired using environmental scanning electron microscopy. Results: Pure sunflower oil was used as the reference material; its induction time was 9.5 ± 0.1 h. Microencapsulation of sunflower oil in pea and soy protein matrices significantly improved its oxidative stability, with induction times of 21.3 ± 0.4 h and 12.5 ± 0.4 h, respectively. Encapsulation with hemp proteins did not significantly change the oxidative stability of the encapsulated oil. Sunflower and brown rice proteins were ineffective materials for this application, with induction times of 7.2 ± 0.2 h and 7.0 ± 0.1 h, respectively. The volume mean diameters of the microparticles formulated with soy and pea proteins were 8.9 ± 0.1 µm and 16.3 ± 1.2 µm, respectively; the values for hemp, sunflower and brown rice proteins could not be obtained due to agglomeration of the microparticles. ESEM images showed smooth, round microparticles with soy and pea proteins; the surfaces were porous with sunflower and hemp proteins, and rough when brown rice proteins were used as the encapsulating agent. Conclusion: Soy and pea proteins appear to be efficient wall materials for the microencapsulation of sunflower oil by spray drying. These results are partly explained by the higher water solubility of soy and pea proteins compared with hemp, sunflower and brown rice proteins. Acknowledgment: This work was performed, in partnership with the SAS PIVERT, within the framework of the French Institute for the Energy Transition (Institut pour la Transition Energétique (ITE)) P.I.V.E.R.T. (www.institut-pivert.com), selected as an Investment for the Future (Investissement d'Avenir). This work was supported, as part of the Investments for the Future, by the French Government under the reference ANR-001-01. Keywords: biopolymer, edible oil, microencapsulation, oxidative stability, release, spray-drying
Procedia PDF Downloads 137